AI-generated images depicting fabricated Hurricane Helene disaster scenes flooded social media platforms, spreading misinformation and complicating disaster response efforts.
Following Hurricane Helene's devastation across the Southeast in late 2024, AI-generated images depicting fake disaster scenes spread rapidly across social media platforms including X, Facebook, and other networks. These images included emotionally manipulative content, such as a crying child holding a puppy in floodwaters, that bore telltale signs of AI generation like extra fingers and inconsistent details. The fake images were shared by thousands of users, including political figures such as Senator Mike Lee, who later deleted his post. Some images were used for political purposes to criticize the government's disaster response, while others appeared designed to generate engagement or potentially facilitate scams. The proliferation of these AI-generated images complicated legitimate disaster response efforts by emergency management agencies that rely on social media for situational awareness. FEMA was forced to create a 'Rumor Response' webpage to combat misinformation. The incident highlighted how AI-generated content can exploit emotional responses during crises and erode trust in legitimate news sources.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed