YouTube's recommendation algorithm disproportionately recommended election fraud content to users who were already skeptical of the 2020 US election's legitimacy, showing them three times as many such videos as the least skeptical users.
In November and December 2020, researchers at New York University's Center for Social Media and Politics conducted an experiment with over 300 YouTube users to study the platform's recommendation algorithm during the period when election fraud claims were prominent. Participants logged into their YouTube accounts, installed a browser extension that captured recommendation data, and followed specified paths through 20 video recommendations starting from randomly assigned seed videos. The study found that participants most skeptical of the 2020 election's legitimacy were recommended three times as many election fraud-related videos as the least skeptical participants: approximately 8 additional recommendations out of roughly 400 total videos suggested to each participant. Although the overall prevalence of election fraud videos was low (a maximum of 12 per participant), the algorithm showed a clear pattern of disproportionately serving such content to users already predisposed to believe conspiracy theories. The researchers noted that YouTube removed election fraud videos from the platform in December 2020, leaving some recommended videos inaccessible for assessment. The study highlighted the tension between effective personalized recommendation systems and the potential amplification of misinformation to susceptible users.
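The reported figures are internally consistent, which a quick back-of-the-envelope check makes visible. This is a sketch only: the per-group counts below are inferred from the summary's ratio and gap, not taken from the study's data.

```python
# Back-of-the-envelope check of the summary's figures. The group counts are
# inferred (hypothetical), chosen to match the reported 3x ratio and the
# ~8-video gap; they are not the study's raw data.

total_recs = 400                       # approx. recommendations shown per participant
least_skeptical = 4                    # inferred count for the least skeptical group
most_skeptical = 3 * least_skeptical   # reported 3x ratio -> 12, matching the stated maximum

gap = most_skeptical - least_skeptical
print(f"gap: {gap} videos")  # matches the ~8 additional recommendations
print(f"share for most skeptical: {most_skeptical / total_recs:.1%}")
```

Even for the most skeptical group, election fraud videos made up only about 3% of recommendations, which is why the summary describes the overall prevalence as low while still noting the disproportionate targeting.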
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed