When asked for academic references on the mathematical properties of lists, ChatGPT provided fabricated citations and fake URLs. It then doubled down on further false claims, stating that a living person had died and supplying links to non-existent obituaries.
A user asked ChatGPT for references on the mathematical properties of lists. ChatGPT responded with five academic references, all of which were completely fabricated: the papers did not exist, the authors had never published works with those titles, and the provided URLs were fake or led to unrelated content. When the same user later asked ChatGPT about Alexander Hanff (a privacy technologist), the AI falsely claimed that he had died in 2019 and that his death had been reported by multiple media outlets. When pressed for evidence, ChatGPT provided fake URLs to supposed obituaries from The Guardian and other publications that never existed. The AI continued to fabricate evidence even when questioned further, generating non-existent links and doubling down on the false death claim. Multiple other users subsequently tested ChatGPT with similar queries and received similar false information about Hanff being deceased. The incident highlights ChatGPT's tendency to generate convincing but completely false information, including fabricated citations and news reports, while presenting this misinformation with apparent authority and confidence.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed