Lawyers at the Morgan & Morgan law firm submitted court filings containing eight fabricated case citations generated by an AI platform, leading a federal judge to impose sanctions totaling $5,000.
In January 2025, attorneys from Morgan & Morgan (America's largest injury law firm) and Goody Law Group filed motions in limine in a Wyoming federal court case involving a defective hoverboard lawsuit against Walmart. The filing cited nine legal cases, but eight of them were entirely fabricated by artificial intelligence. Attorney Rudwin Ayala admitted he had used his firm's internal AI platform, MX2.law, to generate case law citations, uploading his draft brief with prompts such as "add to this Motion in Limine Federal Case law from Wyoming" and "add more case law regarding motions in limine." Without verifying the AI-generated citations, Ayala included them in court filings that were signed by three attorneys.

Defense counsel discovered the fake citations when they could not locate the cases in legal databases; some of the cases appeared to exist only on ChatGPT. U.S. District Judge Kelly Rankin ordered the attorneys to show cause why they should not be sanctioned. On February 24, 2025, Judge Rankin imposed sanctions totaling $5,000: Ayala was fined $3,000 and removed from the case, while T. Michael Morgan and Taly Goody were each fined $1,000.

The attorneys promptly withdrew the erroneous motions, apologized to the court, and agreed to pay opposing counsel's legal fees. Morgan & Morgan implemented new AI training policies and added verification requirements to prevent future incidents.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed