OpenAI's ChatGPT generated defamatory statements about law professor Jonathan Turley, falsely claiming he had sexually harassed a student during a non-existent class trip to Alaska and citing a fabricated Washington Post article as evidence.
In April 2023, UCLA law professor Eugene Volokh conducted a research study in which he asked ChatGPT to generate a list of legal scholars who had sexually harassed someone, including quotes from relevant newspaper articles. The AI chatbot falsely claimed that George Washington University law professor Jonathan Turley had made sexually suggestive comments and attempted to touch a student during a class trip to Alaska, citing a March 2018 Washington Post article as the source. However, no such article existed, there had never been a class trip to Alaska, and Turley had never been accused of harassment. When The Washington Post recreated the query, Microsoft's Bing chatbot, powered by GPT-4, repeated the false claim about Turley. The incident highlighted ChatGPT's propensity to generate potentially damaging falsehoods with fabricated citations, as the models lack reliable mechanisms for verifying their statements. Similar false accusations were generated about other individuals, and Australian mayor Brian Hood threatened the first defamation lawsuit against OpenAI after ChatGPT falsely claimed he had served prison time for bribery when he was in fact the whistleblower in the case. OpenAI has since implemented hard-coded filters that cause ChatGPT to terminate conversations when certain names, such as Turley's, are mentioned, though this workaround creates new problems for users.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed