Human Autonomy and Integrity Harms
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
"AI systems compromising human agency, or circumventing meaningful human control"(p. 14)
Sub-categories (4)
Violation of personal integrity
"Non-consensual use of one’s personal identity or likeness for unauthorised purposes (e.g. commercial purposes)"
Maps to: 4.3 Fraud, scams, and targeted manipulation

Persuasion and manipulation
"Exploiting user trust, or nudging or coercing them into performing certain actions against their will (c.f. Burtell and Woodside (2023); Kenton et al. (2021))"
Maps to: 7.1 AI pursuing its own goals in conflict with human goals or values

Overreliance
"Causing people to become emotionally or materially dependent on the model"
Maps to: 5.1 Overreliance and unsafe use

Misappropriation and exploitation
"Appropriating, using, or reproducing content or data, including from minority groups, in an insensitive way, or without consent or fair compensation"
Maps to: 6.3 Economic and cultural devaluation of human effort

Other risks from Weidinger et al. (2023) (26)

Representation & Toxicity Harms
Maps to: 1.0 Discrimination & Toxicity

Representation & Toxicity Harms > Unfair representation
Maps to: 1.1 Unfair discrimination and misrepresentation

Representation & Toxicity Harms > Unfair capability distribution
Maps to: 1.3 Unequal performance across groups

Representation & Toxicity Harms > Toxic content
Maps to: 1.2 Exposure to toxic content

Misinformation Harms
Maps to: 3.0 Misinformation

Misinformation Harms > Propagating misconceptions/ false beliefs
Maps to: 3.1 False or misleading information