Deceptive alignment
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failures within the AI system itself.
"system learns to detect human monitoring and hides its undesirable properties—simply because any display of these properties is penalized by the feedback process, while that same feedback is usually imperfect. (Consider the problem of verifying a translation into a language you do not speak, or of checking a mathematical proof that is thousands of pages long.) [92, 259]. Rudimentary examples of deceptive alignment have been observed in current systems [322, 333]."(p. 11)
Part of: Harm caused by unaligned competent systems
Other risks from Leech et al. (2024) (13)

Harm caused by incompetent systems → 7.3 Lack of capability or robustness
Harm caused by unaligned competent systems → 7.1 AI pursuing its own goals in conflict with human goals or values
Harm caused by unaligned competent systems > Specification gaming → 7.1 AI pursuing its own goals in conflict with human goals or values
Harm caused by unaligned competent systems > Emergent goals → 7.1 AI pursuing its own goals in conflict with human goals or values
Within-country issues: domestic inequality → 6.1 Power centralization and unfair distribution of benefits
Within-country issues: domestic inequality > Demographic diversity of researchers → 6.1 Power centralization and unfair distribution of benefits