
Existential disaster because of misaligned superintelligence or power-seeking AI

Advancing AI Governance: A Literature Review of Problems, Options, and Proposals

Maas (2023)

Sub-category
Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may result from AI systems using dangerous capabilities, such as manipulation, deception, or situational awareness, to seek power, self-proliferate, or achieve other goals.

(p. 31)
