AI-generated videos purporting to show a jewelry heist at the Louvre museum in Paris were created using OpenAI's Sora and spread widely on social media platforms, misleading viewers about a real crime.
Following a real jewelry heist at the Louvre museum in Paris on October 19, fabricated videos claiming to show footage of the theft were posted on social media platforms including Facebook, Douyin, and RedNote. The videos were created using OpenAI's Sora AI video generation tool and featured masked individuals breaking glass cases in what appeared to be museum galleries. A Hong Kong-based Facebook user with over 240,000 followers posted a reel containing these fake videos with Mandarin narration describing the heist. Digital forensics experts from AI Forensics identified clear signs of AI generation, including morphing hands, disappearing objects, and partially obscured Sora watermarks. Independent verification by comparing the videos to actual images of the Apollo Gallery on the Louvre's website revealed significant discrepancies in the gallery's appearance, roof structure, and artwork. The fabricated videos circulated widely across multiple social media platforms in the aftermath of the actual robbery, potentially misleading thousands of viewers about the nature and details of the real crime.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed