AI tools including Google's Gemini were used to create misleadingly altered images of a police shooting incident in Minneapolis, spreading false information about the events on social media.
Following the shooting of Alex Pretti, a 37-year-old nurse, by federal immigration agents in Minneapolis, social media users created and circulated misleadingly altered images of the incident using AI tools. One image was edited to depict Pretti pointing a gun at an agent when he was actually holding a phone. Another was altered with Google's Gemini AI tool, ostensibly to enhance and sharpen details of the scene; the result contained obvious errors, including changes to Pretti's face and the removal of a gun from an agent's hand. These manipulated images were shared widely on social media platforms, fueling misinformation about the incident and supporting false narratives promoted by pro-Trump influencers and the Trump administration, who made unsubstantiated claims about Pretti's actions. Authenticated footage and witness accounts contradicted these narratives, showing that Pretti had not drawn his weapon and was shot in the back after being disarmed.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed