AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm via malicious human actors, misaligned AI systems, or failures within the AI system itself.
"AI systems may gain the ability to effectively acquire and use knowledge about its status, its position in the broader environment, its avenues for influencing this environment, and the potential reactions of the world (including humans) to its actions (Cotra, 2022). ...However, such knowledge also paves the way for advanced methods of reward hacking, heightened deception/manipulation skills, and an increased propensity to chase instrumental subgoals (Ngo et al., 2024)." (p. 6)
Part of Double edge components
Other risks from Ji et al. (2023) (16)
Causes of Misalignment
7.1 AI pursuing its own goals in conflict with human goals or values | Causes of Misalignment > Reward Hacking
7.1 AI pursuing its own goals in conflict with human goals or values | Causes of Misalignment > Goal Misgeneralization
7.1 AI pursuing its own goals in conflict with human goals or values | Causes of Misalignment > Reward Tampering
7.1 AI pursuing its own goals in conflict with human goals or values | Causes of Misalignment > Limitations of Human Feedback
7.0 AI System Safety, Failures & Limitations | Causes of Misalignment > Limitations of Reward Modeling
7.1 AI pursuing its own goals in conflict with human goals or values