Instagram's recommendation algorithms were connecting and promoting accounts that facilitate and sell child sexual abuse content, with the platform serving as a key discovery mechanism for buyers and sellers.
An investigation by Stanford Internet Observatory researchers and The Wall Street Journal found that Instagram's recommendation algorithms were connecting and promoting accounts that facilitate and sell child sexual abuse material, particularly self-generated child sexual abuse material (SG-CSAM) advertised by accounts purportedly operated by minors. The study examined Instagram's role in a network of accounts selling such material; the original report's reference to "approximately 37,000 Palestinians" appears to be an error and should instead refer to accounts in the CSAM network. Instagram's algorithms recommended these accounts to users viewing related content, allowing buyers to find sellers without keyword searches. The accounts relied on widely used hashtags and had relatively long lifespans. Meta acknowledged the problem and established an internal task force to investigate. The researchers also noted that Twitter had serious child-exploitation problems that persisted after Elon Musk's acquisition, with basic CSAM scanning reportedly broken until the researchers notified the company.
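For context, the "basic CSAM scanning" referenced above typically means matching uploaded media against hash lists of known abuse material (for example, PhotoDNA perceptual hashes or digests shared through clearinghouses such as NCMEC). Below is a minimal sketch of that exact-match pattern; the hash set, file paths, and function names are hypothetical, and this is not any platform's actual implementation.

```python
import hashlib

# Hypothetical blocklist of hex digests of known prohibited media.
# Real deployments load millions of entries from an industry hash-sharing
# program; the entry here is a placeholder.
KNOWN_HASHES = {
    "3f5b0c8e...",  # placeholder digest, not a real entry
}

def sha256_of_file(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading in streaming chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_prohibited(path: str) -> bool:
    """Flag a file whose digest appears on the blocklist.

    Production systems layer perceptual hashing (e.g., PhotoDNA) on top so
    that resized or re-encoded copies still match; exact digests like this
    are only the baseline check.
    """
    return sha256_of_file(path) in KNOWN_HASHES
```

The point of the sketch is only that this layer is cheap and well understood, which is why the researchers treated its reported failure at Twitter as notable.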
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. It may involve the AI providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)
No population impact data reported.