Facebook's AI-powered recommendation system asked users who had watched a video featuring Black men whether they wanted to 'keep seeing videos about Primates,' prompting the company to apologize and disable the feature.
Facebook's artificial-intelligence-powered recommendation system generated an offensive automated prompt asking users if they wanted to 'keep seeing videos about Primates' after they watched a video from The Daily Mail featuring Black men in altercations with white civilians and police officers. The video, dated June 27, 2020, had no connection to monkeys or primates. The incident came to light when Darci Groves, a former Facebook content design manager, received a screenshot of the prompt from a friend and posted it to a product feedback forum for current and former Facebook employees. A Facebook Watch product manager called the prompt 'unacceptable' and said the company was investigating the root cause. Facebook apologized for what it called 'an unacceptable error' and disabled the AI-powered recommendation feature. The company acknowledged that, although it had improved its AI systems, they were not perfect and further progress was needed. It remained unclear whether similar offensive prompts appeared elsewhere on the platform.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Domain classification: Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
Entity: AI system (due to a decision or action made by an AI system).
Intent: Unintentional (due to an unexpected outcome from pursuing a goal).
Timing: Post-deployment (occurring after the AI model has been trained and deployed).