A litigant in person presented fictitious legal case citations generated by ChatGPT to a Manchester court, including one fabricated case name and falsified quoted passages that appeared legitimate.
A civil case was heard in Manchester involving one represented party and one unrepresented litigant in person (LiP). After the barrister for the represented party argued that no precedent supported the LiP's case, the LiP returned the following day with four case citations said to support their argument. On inspection by the barrister, one case name proved to be entirely fabricated, while the other three referred to real cases but quoted passages that did not appear anywhere in the actual judgments. All four citations contained fictitious quoted paragraphs that nonetheless appeared legitimate. When questioned by the judge, the litigant admitted they had asked ChatGPT to find cases that could prove their argument; the chatbot appears to have drawn on real case names and fabricated excerpts tailored to the question. The judge accepted that the misleading submissions were inadvertent and did not penalize the litigant. The incident highlights the potential influence of AI tools on court proceedings, particularly where parties are unrepresented.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed