
Persuasion and manipulation

Sociotechnical Safety Evaluation of Generative AI Systems

Weidinger et al. (2023)

Sub-category
Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"Exploiting user trust, or nudging or coercing them into performing certain actions against their will (c.f. Burtell and Woodside (2023); Kenton et al. (2021))" (p. 31)

Part of Human Autonomy and Integrity Harms
