Facebook's 2018 News Feed algorithm change to promote 'meaningful social interactions' inadvertently amplified divisive, toxic, and inflammatory content, leading to increased polarization and harm to democratic discourse globally.
In early 2018, Facebook implemented a major overhaul of its News Feed algorithm to boost 'meaningful social interactions' (MSI) between friends and family, with CEO Mark Zuckerberg stating the goal was to strengthen bonds and improve user well-being. The change created a point system: likes were worth 1 point; reactions and reshares without text, 5 points; and significant comments or reshares with text, 30 points, with additional multipliers applied for different relationship types.

However, internal Facebook documents show that company researchers discovered the change was having the opposite effect, making the platform 'an angrier place.' Publishers and political parties reoriented their content toward outrage and sensationalism to achieve high engagement. BuzzFeed CEO Jonah Peretti complained that divisive content like '21 Things That Almost All White People are Guilty of Saying' received 13,000 shares and 16,000 comments, while less inflammatory content struggled to gain traction. Facebook data scientists found that 'misinformation, toxicity, and violent content are inordinately prevalent among reshares.' Political parties in Europe told Facebook the algorithm was pushing them toward negative messaging, with one party estimating it had shifted from a 50/50 positive/negative mix to 80% negative posts. Political parties in Poland, Spain, Taiwan, and India reported similar pressure to create inflammatory content.

Facebook integrity team leader Anna Stepanov presented proposed fixes to Mark Zuckerberg in April 2020, but he resisted changes that might reduce user engagement. The algorithm affected Facebook's nearly 3 billion users globally.
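The MSI point system described above can be sketched as a simple weighted sum. The base weights (1 / 5 / 30) are taken from the reporting, but the function name, the specific relationship multiplier values, and the overall structure are illustrative assumptions, not Facebook's actual implementation:

```python
# Engagement weights as described in the internal documents.
BASE_POINTS = {
    "like": 1,
    "reaction": 5,
    "reshare": 5,
    "significant_comment": 30,
}

# Hypothetical relationship multipliers; the documents mention multipliers
# for relationship types, but these specific values are assumptions.
RELATIONSHIP_MULTIPLIER = {
    "close_friend": 2.0,
    "friend": 1.0,
    "acquaintance": 0.5,
}

def msi_score(interactions):
    """Return the total MSI score for a post.

    `interactions` is a list of (interaction_type, relationship) tuples,
    one per user engagement with the post.
    """
    total = 0.0
    for kind, relationship in interactions:
        total += BASE_POINTS[kind] * RELATIONSHIP_MULTIPLIER[relationship]
    return total

# Ten likes from friends score 10 points, while a single significant
# comment from a close friend scores 60 -- illustrating why content that
# provokes comment threads (often outrage-driven) ranked so highly.
example = [("like", "friend")] * 10 + [("significant_comment", "close_friend")]
print(msi_score(example))  # 70.0
```

The key dynamic is visible even in this toy version: because a commented-on post can outscore dozens of passively liked posts, publishers optimizing for distribution were rewarded for content that provoked replies rather than quiet approval.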
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may involve providing advice on or encouraging unsafe behavior. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed