A New York lawyer used ChatGPT to research legal cases for a court filing, but the AI chatbot fabricated six non-existent court cases with fake citations, quotes, and legal opinions, leading to potential sanctions against the attorney.
In May 2023, attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman used OpenAI's ChatGPT to conduct legal research for a personal injury lawsuit filed by Roberto Mata against Avianca Airlines. Mata claimed he was injured when a metal serving cart struck his knee during a 2019 flight. When Avianca moved to dismiss the case, Schwartz submitted a 10-page brief citing more than half a dozen court cases to support his argument.

However, opposing counsel and Judge P. Kevin Castel of the Southern District of New York discovered that six of the cited cases, including Varghese v. China Southern Airlines, Martinez v. Delta Airlines, and Miller v. United Airlines, were completely fabricated by ChatGPT. The fake cases included bogus quotes, internal citations, and detailed legal opinions that appeared authentic but were entirely fictitious. When questioned, ChatGPT assured Schwartz that the cases were real and could be found in legal databases like Westlaw and LexisNexis.

Schwartz, who had 30 years of legal experience but had never used ChatGPT before, admitted he was unaware the AI could generate false information. Judge Castel called this an "unprecedented circumstance" and ordered a June 8, 2023 hearing to consider sanctions against Schwartz and his colleague Peter LoDuca for submitting the fraudulent citations to the court.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed