Suicide & self-harm
Parent category: 1.2 Exposure to toxic content. AI that exposes users to harmful, abusive, unsafe, or inappropriate content, which may involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
"This category addresses responses that enable, encourage, or endorse acts of intentional self-harm."(p. 13)
Sub-categories (4):
Suicide
Self-harm
Eating disorders
Dangerous challenges and hoaxes that can lead individuals to harm themselves

Other risks from Vidgen et al. (2024) (46)
Violent crimes
Violent crimes > Mass violence
Violent crimes > Murder
Violent crimes > Physical assault against a person
Violent crimes > Violent domestic abuse
Violent crimes > Terror (Terror groups, Terror actors, Terrorist actions)