A Stanford misinformation expert submitted a court declaration supporting Minnesota's deepfake law that contained multiple citations to non-existent academic studies, apparently generated by AI software like ChatGPT.
Professor Jeff Hancock, founding director of the Stanford Social Media Lab and an expert on technology-mediated deception, submitted an expert declaration supporting Minnesota's new law banning the use of deepfake technology to influence elections. The law is being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson, who argue that it violates First Amendment protections.

Hancock's declaration cited numerous academic works, but several of the sources do not appear to exist. For instance, it cited a study titled 'The Influence of Deepfake Videos on Political Attitudes and Behavior,' purportedly published in the Journal of Information Technology & Politics in 2023, but no such study appears in that journal or in academic databases; the journal pages cited contain entirely different articles. Legal experts identified these as AI 'hallucinations,' likely generated by a large language model such as ChatGPT. Law professor Eugene Volokh found another non-existent citation, to a study allegedly titled 'Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance.'

It remains unclear whether the fake citations were inserted by Hancock, an assistant, or another party. Hancock's declaration concluded with a statement, made under penalty of perjury, that everything stated in it was true and correct. Neither Hancock, the Stanford Social Media Lab, nor the Minnesota Attorney General's office responded to requests for comment.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome of pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed