ChatGPT falsely claimed that Australian mayor Brian Hood had been convicted of bribery and served prison time, when in fact he was a whistleblower who exposed the corruption.
Brian Hood, mayor of Hepburn Shire in Australia, discovered that OpenAI's ChatGPT was generating false information about him related to an early-2000s bribery scandal involving Note Printing Australia, a subsidiary of the Reserve Bank of Australia. Hood did work for the company, but he was in fact the whistleblower who notified authorities about bribes paid to foreign officials to win currency-printing contracts, and he was never charged with any crime. ChatGPT, however, falsely stated that Hood had been convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and had been sentenced to prison. Hood's lawyers sent a letter of concern to OpenAI on March 21, giving the company 28 days to correct the errors or face a defamation lawsuit, which would potentially be the first defamation case against ChatGPT. Defamation damages in Australia are generally capped at around A$400,000, and Hood could potentially claim more than A$200,000 given the serious nature of the false statements. The case highlights AI systems' tendency to generate plausible but false information, what researchers call 'hallucinations': language models producing convincing text that is not factual.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead users to form inaccurate beliefs and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed