xAI's Grok chatbot generated a speculative AI-"enhanced" image of an unmasked ICE agent involved in a fatal shooting, leading to the misidentification and harassment of two innocent men named Steve Grove.
Following the fatal shooting of Renee Good by an ICE agent in Minneapolis, users on X asked xAI's generative AI chatbot Grok to "unmask" the agent, who wore a mask in eyewitness videos. Grok generated an AI-enhanced image depicting what the agent might look like unmasked, despite experts warning that AI enhancement tends to hallucinate facial details that can appear visually convincing yet are useless for biometric identification. The AI-generated image circulated alongside the incorrect name "Steve Grove," prompting targeted harassment of two innocent individuals: Steven Grove, a gun shop owner in Springfield, Missouri, whose Facebook page came under attack, and Steve Grove, publisher of the Minnesota Star Tribune, whose newspaper issued a statement describing what it believed was a coordinated online disinformation campaign. Journalists later identified the actual ICE agent as Jonathan Ross, who had been involved in a prior incident in which he was dragged by a car during a traffic stop in June of the previous year.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed