Three individuals in China used AI software to generate false news articles about natural disasters and emergencies, spreading misinformation on social media platforms to gain traffic and financial rewards.
Between January and June 2024, three separate incidents occurred in China in which individuals used AI software to create and spread false information.

In the first case, on January 23, 2024, Yang used AI software to generate a fake news article claiming 'Yunnan landslide disaster killed 8 people' and published it on a network platform to gain traffic for profit.

In the second case, on June 9, 2024, Luo used AI software to generate false earthquake disaster images and spread misinformation about a 5.0 magnitude earthquake in Xide County, Sichuan Province, falsely claiming severe casualties and damage; the actual earthquake, in Muli County, caused no casualties.

In the third case, in June 2024, Tian used AI tools to create and publish false information in Chengcheng County about 'a woman's electric vehicle being impounded leading to sudden death'.

All three individuals sought to attract attention and earn traffic rewards from platforms. The false information caused negative social impacts and disrupted public order. Police investigated all three cases and imposed administrative penalties on the perpetrators, who confessed to using AI software to fabricate the rumors.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, or behavior.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.