Opera's AI chatbot Aria falsely accused several respected photographers of committing war crimes when asked about conflict documentation.
On May 24th, Opera, a technology company based in Oslo, Norway and owned by Chinese billionaire Zhou Yahui, released a new browser integrating an AI chatbot called Aria. The chatbot is built on OpenAI's GPT technology and augmented with live web results. When tested, Aria generated false accusations against respected photographers, including Lynsey Addario, James Nachtwey, Ron Haviv, Lee Miller, and Larry Towell, claiming they had committed war crimes. It also incorrectly labeled the military photographers Raymond D'Addario and Ronald L. Haeberle as war criminals. These photographers documented conflicts at great personal risk for the public record; they did not commit war crimes. The incident demonstrates the system's inability to fact-check its own outputs despite communicating in fluent, human-like language. The report characterizes the AI product as 'defective and dangerous' because it generates slanderous misinformation about real individuals.
Domain classification, causal taxonomy, severity scores, and national security assessments were generated by LLM classification and may contain errors; an illustrative sketch of this kind of classification appears after the fields below.
Domain: AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)
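For illustration only: the database does not document its classification pipeline, so the following is a minimal sketch of how fields like the ones above might be LLM-classified, assuming the OpenAI Python client. The model name, the allowed label set, and the classify_incident helper are hypothetical, not the database's actual method.

```python
# Hypothetical sketch: classify an incident description into the causal
# taxonomy fields above (Entity, Intent, Timing) with an LLM.
# Model name, labels, and helper name are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TAXONOMY = {
    "entity": ["AI system", "Human"],
    "intent": ["Intentional", "Unintentional"],
    "timing": ["Pre-deployment", "Post-deployment"],
}

def classify_incident(description: str) -> dict:
    """Ask the model for one JSON-encoded label per taxonomy field."""
    prompt = (
        "Classify the AI incident below. Respond with a JSON object "
        f"whose keys and allowed values are: {json.dumps(TAXONOMY)}\n\n"
        f"Incident: {description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-completions model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # force valid JSON output
    )
    labels = json.loads(response.choices[0].message.content)
    # Drop anything outside the allowed label set: the classifier is
    # itself an LLM and subject to the same error modes flagged above.
    return {k: v for k, v in labels.items() if v in TAXONOMY.get(k, [])}

print(classify_incident(
    "Opera's chatbot Aria falsely accused photographers of war crimes."
))
```

Constraining the output to JSON and filtering to a fixed label set keeps such a classifier from inventing categories, though, as the disclaimer above notes, the labels it does emit may still be wrong.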