Persuasion and manipulation
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
"Exploiting user trust, or nudging or coercing them into performing certain actions against their will (c.f. Burtell and Woodside (2023); Kenton et al. (2021))"(p. 31)
Part of Human Autonomy & Integrity Harms
Other risks from Weidinger et al. (2023) (26)
Representation & Toxicity Harms
1.0 Discrimination & ToxicityRepresentation & Toxicity Harms > Unfair representation
1.1 Unfair discrimination and misrepresentationRepresentation & Toxicity Harms > Unfair capability distribution
1.3 Unequal performance across groupsRepresentation & Toxicity Harms > Toxic content
1.2 Exposure to toxic contentMisinformation Harms
3.0 MisinformationMisinformation Harms > Propagating misconceptions/ false beliefs
3.1 False or misleading information