Colorado attorney Zachariah Crabill used ChatGPT to research case law for a legal motion, but the AI generated multiple fake lawsuit citations, which he filed with the court. As a result, his motion was denied, a complaint was filed against him, he was terminated from his law firm, and he received a one-year suspension from practicing law.
In May 2023, Zachariah Crabill, a Colorado Springs attorney with about a year and a half of experience, used OpenAI's ChatGPT to help research case law for his first civil litigation motion, a motion to set aside a summary judgment. Crabill was defending a client accused of breaching a car payment agreement and turned to the AI chatbot to speed up the time-intensive legal research process. Initially, ChatGPT provided accurate responses about Colorado laws, leading Crabill to trust the technology. However, when he asked for case citations to support his motion, ChatGPT generated dozens of fake cases that appeared realistic but did not actually exist. For example, one citation was 'Gonzales v. Allstate Ins. Co.' from 2014, but the real case was from 2002 and involved different facts. Crabill filed the motion without verifying the citations through traditional legal databases like LexisNexis. During the court hearing, the judge could not locate the cited cases and denied the motion due to the false citations. The judge then reported Crabill to the Office of Attorney Regulations. Crabill was subsequently fired from Baker Law Group in July 2023 and later received a one-year suspension from practicing law, which could be reduced to 90 days plus two years' probation if he successfully completes certain requirements.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed