Islamist groups in Sudan used AI to create fake audio recordings and spread disinformation through coordinated social media networks during the country's ongoing conflict, including fabricated statements attributed to US officials intended to suggest a Western retreat and bolster the groups' own influence.
During Sudan's ongoing military conflict, which began in 2023, Islamist groups deployed AI-generated content as part of sophisticated disinformation campaigns. The groups used AI to create fake audio recordings, including one purportedly from US Ambassador John Godfrey outlining strategies to impose secularism on Sudan. Multiple incidents occurred throughout 2023-2024, including AI-generated videos of former leader Omar al-Bashir that received hundreds of thousands of views on TikTok, and fabricated recordings attributed to military leaders ordering civilian killings. The AI-generated content was spread through extensive networks of coordinated social media accounts that exploited images of Western evacuations to claim victory for the Islamist movement. Some content creators acknowledged making satirical deepfakes that later morphed into disinformation. The fake content was shared by politicians and journalists, and was even broadcast on national television before being removed. The fact-checking organization Beam Reports documented the trend and noted the difficulty of combating AI-generated disinformation in the absence of on-the-ground reporting during the conflict.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed