Facebook forced content moderators back to office work during the COVID-19 pandemic after AI content moderation systems failed to adequately replace human moderators, exposing workers to health risks.
In November 2020, Facebook content moderators sent an open letter to executives including Mark Zuckerberg and Sheryl Sandberg protesting the company's decision to force them back into offices during the COVID-19 pandemic. The moderators explained that when the pandemic began, Facebook had attempted to replace human content moderation with AI systems, allowing both full-time staff and contractors to work from home. The experiment failed significantly: the algorithms could not adequately identify and remove toxic content, including graphic violence, child abuse, hate speech, terrorism, and self-harm material, while also incorrectly removing legitimate speech. The AI systems lacked the sophistication to distinguish satire, separate journalism from disinformation, or respond quickly to urgent content such as self-harm or child abuse. Because of these failures, Facebook recalled approximately 35,000 content moderators, employed by contractors such as Accenture and CPL, to offices to resume manual moderation. The moderators reported multiple COVID-19 cases occurring in their offices and demanded hazard pay, better health protections, and the option for high-risk individuals to work from home. The letter was signed by more than 300 content moderators, who argued that they were essential to Facebook's business yet treated as expendable workers.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that fail to perform reliably or effectively under varying conditions, making them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed