Anthropic's Claude AI chatbot hallucinated inaccurate details in an academic citation that the company's lawyers used in a copyright lawsuit, requiring the law firm to admit the error to the court.
In May 2025, Anthropic admitted to a Northern California federal court that its Claude AI chatbot had generated an erroneous citation used in its legal defense against music publishers Universal Music Group, Concord, and ABKCO. The error arose when Anthropic's expert witness, Olivia Chen, cited a legitimate academic journal article and attorney Ivana Dukanovic of Latham & Watkins used Claude to format the citation. Claude hallucinated an inaccurate title and incorrect authors while preserving the correct publication year and link. Opposing counsel accused Chen of relying on AI-fabricated sources, prompting U.S. Magistrate Judge Susan van Keulen to order Anthropic to respond to the allegations. Dukanovic acknowledged the error as an "embarrassing and unintentional mistake" and said the firm's manual citation check had failed to catch the hallucination. Latham & Watkins has since implemented additional review processes to prevent recurrences. The incident fits a broader pattern of lawyers running into problems with AI-generated legal citations, with similar cases involving faulty ChatGPT citations reported in California and Australia.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)
No population impact data reported.