Google's AI tools Bard, Gemma, and Gemini falsely connected conservative activist Robby Starbuck to sexual assault allegations and white nationalist Richard Spencer, prompting a defamation lawsuit seeking more than $15 million.
Conservative activist Robby Starbuck filed a defamation lawsuit against Google in Delaware Superior Court, seeking more than $15 million in damages. The incident began in 2023, when Starbuck discovered that Google's AI tool Bard falsely claimed he had ties to white nationalist Richard Spencer. Starbuck publicized the claim on the social media platform X, tagging Google and its CEO. Later, in 2024, newer Google AI tools, including Gemma (an open AI model) and Gemini (a consumer-facing AI system), generated additional false claims about Starbuck, including assertions that he had been accused of sexual assault and had participated in the January 6, 2021 Capitol riot. According to the lawsuit, Gemma listed fabricated media links as sources for these claims. Starbuck's lawyer sent cease-and-desist letters to Google over the summer, but, according to the suit, the company did not adequately respond. Google acknowledged that inaccurate output is a well-known issue for large language models and said it works to minimize such problems. This is Starbuck's second AI-related defamation lawsuit this year; he previously sued Meta over similar false claims, a case that was settled on undisclosed terms.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)