Facebook's recommendation algorithms exposed users with low digital literacy to significantly more violent and sexual content: these vulnerable users, predominantly older adults, people of color, and those with lower education, saw 11-13% more disturbing content than digitally skilled users.
Facebook conducted internal research studies examining how its content recommendation algorithms affected users with different levels of digital literacy. In a survey of 67,000 users across 17 countries, researchers found that users who could not correctly answer basic questions about Facebook features saw 11.4% more nudity and 13.4% more graphic violence than users who answered all questions correctly. The company identified that between one-quarter and one-third of all Facebook users qualified as low-tech-skilled, including roughly one-sixth of U.S. users and up to half of users in emerging markets. These vulnerable users were significantly more likely to be older, people of color, lower-educated, and of lower socioeconomic status. Through in-depth interviews and home visits with 18 vulnerable users, Facebook found that disturbing content caused them to disconnect from the platform for extended periods and exacerbated existing hardships. Examples included a middle-aged Black woman repeatedly shown posts about racial hatred and child bullying, and a Narcotics Anonymous group member receiving alcohol advertisements. The research revealed that low-literacy users lacked knowledge of content control features such as hide, unfollow, and block, so the algorithms interpreted their passive scrolling as approval and served them more disturbing content.
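The mechanism described in the last sentence, where the absence of explicit negative feedback is read as interest, can be illustrated with a minimal sketch. This is a hypothetical, simplified engagement scorer written for illustration only; the signal names, weights, and the `engagement_score` function are assumptions and do not describe Facebook's actual ranking system.

```python
from dataclasses import dataclass

# Hypothetical engagement signals a feed-ranking model might consume.
# Names and weights are illustrative assumptions, not Facebook's real system.
@dataclass
class PostInteraction:
    dwell_seconds: float      # how long the user lingered on the post
    clicked: bool             # explicit positive engagement
    hid_post: bool            # explicit negative feedback ("hide")
    unfollowed_source: bool   # explicit negative feedback ("unfollow")
    blocked_source: bool      # explicit negative feedback ("block")

def engagement_score(ix: PostInteraction) -> float:
    """Toy scoring rule: explicit negative actions push the score down,
    but long passive dwell time is counted as positive interest."""
    score = 0.0
    if ix.hid_post or ix.unfollowed_source or ix.blocked_source:
        score -= 5.0                                # strong negative signal
    if ix.clicked:
        score += 2.0
    score += min(ix.dwell_seconds, 30) / 10.0       # passive dwell read as interest
    return score

# A low-digital-literacy user who never uses hide/unfollow/block but lingers
# on disturbing content still produces a positive score, so similar content
# keeps getting recommended; a user who hides the post pushes the score negative.
passive_user = PostInteraction(dwell_seconds=25, clicked=False,
                               hid_post=False, unfollowed_source=False,
                               blocked_source=False)
skilled_user = PostInteraction(dwell_seconds=25, clicked=False,
                               hid_post=True, unfollowed_source=False,
                               blocked_source=False)

print(engagement_score(passive_user))  # positive -> more similar content served
print(engagement_score(skilled_user))  # negative -> less similar content served
```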
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: choices in AI system design and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed