Following the Air India flight crash in Ahmedabad that killed 275 people, AI-generated misinformation, including fake investigation reports, manipulated videos of victims, and false imagery, spread across social media platforms, causing additional distress to grieving families and aviation professionals.
Following the Air India Flight AI171 crash in Ahmedabad on June 12 that killed 275 people and left a sole survivor, AI-generated misinformation proliferated across social media platforms. Bad actors used AI to create and spread false content, including a fake preliminary investigation report that mimicked genuine aviation terminology and appeared professional but was actually generated from details of a 2024 LATAM Airlines incident. AI-generated visuals of passengers who died were created and shared, compounding families' grief. In one case, Rajasthan-based teacher Kuldeep Bhatt, whose cousin Komi Vyas died in the crash, was confronted with an AI-generated video depicting her cremation, fabricated from a selfie she had sent from the plane; the video went viral on WhatsApp before the family had even identified her body. Digital fraud detection firm mFilterIT observed a disturbing pattern of bad actors leveraging AI and social media to spread misinformation during this sensitive event. Even aviation professionals were deceived by the sophisticated AI-generated content, which appeared authentic.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed