AI-generated fake images of the Hollywood Sign on fire spread widely on social media during real Los Angeles wildfires, causing distress to evacuees and spreading misinformation about the landmark's safety.
During the deadly wildfires that broke out around Los Angeles on January 8, 2025, including the Sunset Fire in the Hollywood Hills, fake AI-generated images showing the Hollywood Sign burning spread across social media platforms. The actual Hollywood Sign was not affected by the fires and remained secure, according to the chair of the Hollywood Sign Trust. Some of the viral fake images carried watermarks from Grok, X's AI chatbot, which was capable of generating similar burning-sign images when prompted.

The false images caused distress among people who had already been evacuated from their homes because of the real fires, and many posted complaints about the harmful misinformation. Both X and Meta added fact-checking context warnings to some posts clarifying that the images were fake.

The Sunset Fire, the blaze closest to the Hollywood Sign, burned 43 acres before firefighters contained it and never reached the landmark, which sits about two miles away across the 101 Freeway. At least six people died in the broader Southern California wildfires, which forced nearly 180,000 people to evacuate and left more than 1.5 million without power.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed