TikTok's recommendation algorithm systematically promoted eating disorder, self-harm, and suicide content to teenage users, including accounts registered as 13-year-olds, within minutes of their expressing interest in such topics.
Research by the Center for Countering Digital Hate (CCDH) found that TikTok's recommendation algorithm pushes harmful content to teenage users within minutes of their showing interest in related topics. The study created accounts registered as 13-year-olds in the US, UK, Canada, and Australia, giving some accounts usernames containing 'loseweight' to simulate vulnerable users. After the accounts briefly paused on and liked videos about body image, eating disorders, and mental health during a 30-minute period, they were recommended suicide-related content in under three minutes and eating disorder material within eight minutes.

Vulnerable accounts received 12 times as many recommendations for self-harm and suicide-related videos as standard accounts, and were shown mental health or body image content every 27 seconds. The platform surfaced content including dangerously restrictive diets, pro-self-harm material, and videos romanticizing suicide. Hashtags such as #proana and #thinsp0 had accumulated millions of views despite TikTok's policies against such content.

Users reported being trapped in algorithmic feeds of triggering content, with some deleting their accounts to escape the cycle. Because TikTok's algorithm continues to surface triggering content as long as users engage with it, experts describe the result as a harmful echo chamber for vulnerable teenagers.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may include providing advice about or encouraging harmful actions. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.