A lawyer used AI to draft a legal brief containing 21 instances of fabricated case citations and misrepresentations, leading to a $2,500 sanction from a federal appeals court.
In February 2026, the 5th U.S. Circuit Court of Appeals imposed a $2,500 sanction on attorney Heather Hersh of FCRA Attorneys for submitting a legal brief containing AI-generated fictitious content. The brief was filed as part of an appeal of sanctions against attorney Shawn Jaffer and his law firm in a Fair Credit Reporting Act lawsuit. The court identified 21 instances of fabricated quotations and serious misrepresentations of law or fact in Hersh's brief. When confronted, Hersh initially claimed she had relied on publicly available versions of the cases and blamed legal databases for the inaccuracies, admitting to using AI only when directly asked. Judge Jennifer Walker Elrod called Hersh's response 'not credible' and 'misleading,' noting that a lesser sanction would have been imposed had she been more forthcoming. The court expressed frustration that AI-hallucinated case citations remain a growing problem in the courts, citing a database that listed 239 cases of AI-generated hallucinations in U.S. legal filings as of the incident date. The 5th Circuit had previously considered adopting specific rules for AI use by lawyers but decided existing professional conduct rules were sufficient.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.
Human: Due to a decision or action made by humans
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed