Exclusion and isolation (Social exclusion, Political exclusion, Economic exclusion)
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. This may involve the AI providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
Human: Due to a decision or action made by humans
AI system: Due to a decision or action made by an AI system
Other: Due to some other reason, or the cause is ambiguous
Not coded

Intentional: Due to an expected outcome from pursuing a goal
Unintentional: Due to an unexpected outcome from pursuing a goal
Other: Without clearly specified intentionality
Not coded

Pre-deployment: Occurring before the AI is deployed
Post-deployment: Occurring after the AI model has been trained and deployed
Other: Without a clearly specified time of occurrence
Not coded
Part of Hate
Other risks from Vidgen et al. (2024) (46)
Violent crimes (mapped to 1.2 Exposure to toxic content):
- Violent crimes > Mass violence
- Violent crimes > Murder
- Violent crimes > Physical assault against a person
- Violent crimes > Violent domestic abuse
- Violent crimes > Terror (Terror groups, Terror actors, Terrorist actions)