Two federal judges in New Jersey and Mississippi admitted their offices used AI tools (ChatGPT and Perplexity) to draft court documents that contained fabricated case citations, fictional quotes, and non-existent parties, requiring the documents to be retracted after lawyers identified the errors.
In summer 2025, two federal judges issued court documents containing significant factual errors that were later attributed to AI use by their staff. Judge Henry T. Wingate of the Southern District of Mississippi issued a temporary restraining order on July 20 in a case involving bans on diversity training and the teaching of "transgender ideology"; the order named non-existent plaintiffs and defendants, recited allegations that had never been made, and included false quotes. Judge Julien Xavier Neals of the District of New Jersey issued an opinion on June 30 in a securities class-action lawsuit against CorMedix that contained fabricated case citations and nonexistent quotes attributed to real cases. Both documents were hastily retracted after defense attorneys alerted the judges to the errors. Following a Senate Judiciary Committee inquiry led by Senator Chuck Grassley, both judges admitted in October letters that the errors resulted from AI use: Wingate's law clerk had used Perplexity AI as a drafting assistant, and Neals's law school intern had used ChatGPT for legal research. Neither judge initially disclosed the AI-related cause of the errors when first confronted by attorneys. Both judges have since implemented corrective measures, including additional review processes and policies restricting AI use in their chambers.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.