Representation & Toxicity Harms
"AI systems under-, over-, or misrepresenting certain groups or generating toxic, offensive, abusive, or hateful content" (p. 14)
Sub-categories (3)

Unfair representation
"Mis-, under-, or over-representing certain identities, groups, or perspectives or failing to represent them at all (e.g. via homogenisation, stereotypes)"
→ 1.1 Unfair discrimination and misrepresentation

Unfair capability distribution
"Performing worse for some groups than others in a way that harms the worse-off group"
→ 1.3 Unequal performance across groups

Toxic content
"Generating content that violates community standards, including harming or inciting hatred or violence against individuals and groups (e.g. gore, child sexual abuse material, profanities, identity attacks)"
→ 1.2 Exposure to toxic content

Other risks from Weidinger et al. (2023) (26)
Misinformation Harms → 3.0 Misinformation
Misinformation Harms > Propagating misconceptions/false beliefs → 3.1 False or misleading information
Misinformation Harms > Erosion of trust in public information → 1.1 Unfair discrimination and misrepresentation
Misinformation Harms > Pollution of information ecosystem → 3.2 Pollution of information ecosystem and loss of consensus reality
Information & Safety Harms → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Information & Safety Harms > Privacy infringement → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information