Instagram's algorithm repeatedly recommended suicide, self-harm and depression content to 14-year-old Molly Russell; in 2022 a coroner ruled that the platform had contributed to her 2017 death.
In November 2017, 14-year-old Molly Russell died by suicide in London. A coroner's inquest revealed that in the six months before her death, Molly had viewed, liked or saved 16,300 pieces of content on Instagram, of which about 2,100 (roughly 12 per day) related to suicide, self-harm and depression. The investigation found that Molly had interacted with accounts dedicated to sharing depressive and suicidal material, often via hashtags that linked to further explicit content. Many of the posts glorified inner struggle and the concealment of emotional distress. In October 2022 the coroner ruled that Instagram and other social media platforms had 'contributed to her death in a more than minimal way' and that online content 'affected her mental health in a negative way.' Meta acknowledged that Molly had seen some content that violated its policies but defended its overall practices. The inquest drew on more than 30,000 pages of material from Molly's devices and more than 16,000 pages from her Instagram account, which Meta provided only after a lengthy legal battle.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe or inappropriate content. May involve the AI providing advice on, or encouraging, harmful actions. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.
Entity: AI system (due to a decision or action made by an AI system)
Intent: Unintentional (due to an unexpected outcome from pursuing a goal)
Timing: Post-deployment (occurring after the AI model has been trained and deployed)