Three lawyers at Butler Snow law firm used ChatGPT to generate fabricated legal case citations in court filings defending Alabama prison officials, resulting in judicial sanctions and disqualification from the case.
In 2021, Frankie Johnson filed a lawsuit against Alabama prison officials for failing to protect him from repeated stabbings while incarcerated. The Alabama attorney general's office hired the law firm Butler Snow to defend the case, paying the firm millions of dollars. In May 2024, Butler Snow attorney Matthew Reeves used ChatGPT to generate legal citations for court filings related to scheduling disputes over Johnson's deposition. The AI produced five fabricated case citations that either did not exist or were irrelevant to the legal issues. For example, one citation referenced "Kelley v. City of Birmingham" from 2021, but the only actual case by that name was decided in 1939 and concerned a speeding ticket. After opposing counsel discovered the false citations and filed a motion pointing out the fabrications, U.S. District Judge Anna Manasco scheduled a hearing to determine sanctions. Reeves admitted he had violated firm policy by using ChatGPT without verification and had failed to check the citations against legal databases such as Westlaw or PACER before filing. Judge Manasco ultimately disqualified three Butler Snow lawyers from the case, publicly reprimanded them, and referred them to the Alabama State Bar for disciplinary action. The incident is part of a growing trend: one database tracks 106 instances worldwide in which courts have found AI hallucinations in legal documents.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Human
Due to a decision or action made by humans
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed