The AI Incident Database implemented machine translation to support 133 languages, acknowledging that this would inevitably produce translation errors and occasionally offensive content, as demonstrated by Google Lens mistranslating a Korean book title into an obscene phrase.
The AI Incident Database, operated by the Responsible AI Collaborative, implemented machine translation to support incident reporting in 133 languages while providing a user interface in only English and Spanish. The organization acknowledged that machine translation systems regularly produce offensive and sometimes dangerous translations, citing its own testing between Spanish and English, which found the output interpretable but awkward and inconsistent.

A specific example was Google Lens's camera-based translation feature mistranslating a Korean book title meaning 'that, that' (in the sense of 'on the tip of my tongue') as the obscene phrase 'dick sucker' when pointed at the cover of a book by Korea's first minister of culture. The mistranslation occurred because the Korean word for 'that' can also be slang for male genitalia; lacking the context to recognize a serious book title, the system selected the most offensive interpretation, likely drawn from internet message boards.

To mitigate such harms, the organization adopted five best practices: identifying machine-translated content in the user interface, providing links to the untranslated source text, enabling users to report and correct bad translations, validating translation effectiveness before general availability, and developing a community to interpret and respond to translation issues.
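The first three practices can be illustrated with a minimal data-model sketch. This is not the AI Incident Database's actual implementation; the `TranslatedReport` class, its fields, and its methods are hypothetical names chosen for illustration, assuming a record that flags machine-translated text, keeps the untranslated source alongside it, and collects user-submitted corrections.

```python
from dataclasses import dataclass, field

@dataclass
class TranslatedReport:
    """Hypothetical record illustrating three of the five practices:
    labeling machine translation, linking the source, collecting corrections."""
    source_text: str            # untranslated original (practice 2)
    source_language: str        # e.g. "ko"
    target_language: str        # e.g. "en"
    machine_translation: str
    corrections: list = field(default_factory=list)  # user-reported fixes (practice 3)

    def display(self) -> str:
        # Practice 1: label machine-translated content in the UI.
        # Practice 2: keep a pointer back to the untranslated source.
        return (f"[Machine translated from {self.source_language}] "
                f"{self.machine_translation} "
                f"(view original: {self.source_text})")

    def report_translation(self, corrected: str) -> None:
        # Practice 3: let readers flag and correct bad translations.
        self.corrections.append(corrected)
```

The design point is that the translated text never replaces the original: both travel together in one record, so the UI can always disclose provenance and route corrections back to the source.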
The domain classification, causal taxonomy, severity scores, and national security assessments below were generated by an LLM and may contain errors.
Toxic content: AI that exposes users to harmful, abusive, unsafe, or inappropriate content, possibly including advice on or encouragement of harmful actions. Examples include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)