TikTok's recommendation algorithm systematically promoted false and misleading content about the Russia-Ukraine war, including both pro-Russian propaganda and other misinformation, to new accounts within 40 minutes of their creation.
TikTok, owned by Chinese company ByteDance, experienced widespread circulation of false and misleading content about the Russia-Ukraine war through its recommendation algorithm. The platform became a major source for war-related videos, with the #Ukrainewar hashtag amassing nearly 500 million views and some popular videos gaining close to one million likes. However, many videos contained unverified or false information, including footage from video games presented as real combat, audio from the 2020 Beirut explosion reused in Ukraine war videos, and various forms of propaganda from both sides.

NewsGuard's investigation found that new accounts were shown false or misleading content about the war within 40 minutes of joining the platform, with feeds becoming almost exclusively populated with war content that mixed disinformation with reliable sources without any distinction. The platform's algorithm and features, including audio reuse capabilities and the 'For You' page recommendation system, facilitated the rapid spread of this misinformation.

TikTok implemented some countermeasures, including banning the Russian state media outlets Sputnik and Russia Today in the EU, adding content labels, and dedicating more resources to monitoring, but the misinformation continued to proliferate. Users like 19-year-old Bre Hernandez in Los Angeles believed they were seeing authentic war footage when they were actually viewing manipulated content, demonstrating how the platform's failures affected real users' understanding of current events.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs may experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed