AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failures within the AI system itself.
Capabilities and novel functionality can spontaneously emerge... even though these capabilities were not anticipated by system designers. If we do not know what capabilities systems possess, systems become harder to control or safely deploy. Indeed, unintended latent capabilities may only be discovered during deployment. If any of these capabilities are hazardous, the effect may be irreversible. (p. 14)
Other risks from Hendrycks & Mazeika (2022) (7)
Weaponization → 4.2 Cyberattacks, weapon development or use, and mass harm
Enfeeblement → 5.2 Loss of human agency and autonomy
Eroded epistemics → 3.2 Pollution of information ecosystem and loss of consensus reality
Proxy misspecification → 7.1 AI pursuing its own goals in conflict with human goals or values
Value lock-in → 6.1 Power centralization and unfair distribution of benefits
Deception → 7.1 AI pursuing its own goals in conflict with human goals or values