A Palestinian construction worker was arrested by Israeli police after Facebook's automatic translation software incorrectly translated his Arabic 'good morning' post as 'attack them' in Hebrew and 'hurt them' in English.
A Palestinian construction worker posted a photo of himself on Facebook standing next to a bulldozer at his workplace in the Israeli West Bank settlement of Beitar Ilit, near Jerusalem, captioned 'good morning' in Arabic. Facebook's proprietary AI-powered automatic translation system mistranslated this innocent greeting as 'attack them' in Hebrew and 'hurt them' in English.

Israeli police, who monitor Palestinian social media for potential security threats, were notified of the translated post and arrested the man on suspicion of incitement, believing he was threatening to use the bulldozer in a terrorist attack, since bulldozers had been used in previous ramming attacks. No Arabic-speaking officer reviewed the original post before the arrest was made. The man was questioned for several hours before police realized their mistake and released him.

Facebook apologized for the translation error and stated it was taking steps to prevent similar incidents. The incident occurred in October 2017 and highlights the risks of relying on automated translation systems for security monitoring.
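One mitigation the incident points to is a human-in-the-loop gate: automated-translation alerts should not trigger enforcement action directly, and low-confidence translations should be routed to a reviewer fluent in the source language. The sketch below is purely illustrative; the function name, threshold, and confidence score are assumptions, and no real Facebook or police system is being described.

```python
# Hypothetical sketch of a human-review gate for machine-translated alerts.
# `confidence` stands in for a translation model's own score in [0, 1];
# the 0.9 threshold is an arbitrary illustrative choice.

def route_alert(translated_text: str, confidence: float, threshold: float = 0.9) -> str:
    """Decide how to handle a machine-translated post flagged as a threat.

    Below the threshold, the post is queued for a human reviewer who reads
    the source language, rather than being acted on automatically. Even
    high-confidence alerts are only escalated for verification, never
    treated as grounds for direct action.
    """
    if confidence >= threshold:
        return "escalate_for_verification"
    return "human_review"

# A mistranslation such as 'good morning' -> 'attack them' would ideally
# carry a low model score and be routed to a human reviewer.
print(route_alert("attack them", confidence=0.42))  # human_review
print(route_alert("attack them", confidence=0.97))  # escalate_for_verification
```

The design point is that the translation system's output is treated as a signal to be checked, not a fact to be acted on, which is exactly the step missing in the incident (no Arabic-speaking officer reviewed the original post).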
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed