AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm, including deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures of the AI system itself.
"The model can make sequential plans that involve multiple steps, unfolding over long time horizons (or at least involving many interdependent steps). It can perform such planning within and across many domains. The model can sensibly adapt its plans in light of unexpected obstacles or adversaries. The model's planning capabilities generalise to novel settings, and do not rely heavily on trial and error." (p. 5)
Other risks from Shevlane et al. (2023) (8)
Cyber-offense → 4.2 Cyberattacks, weapon development or use, and mass harm
Deception → 7.2 AI possessing dangerous capabilities
Persuasion and manipulation → 7.2 AI possessing dangerous capabilities
Political strategy → 7.2 AI possessing dangerous capabilities
Weapons acquisition → 7.2 AI possessing dangerous capabilities
AI development → 7.2 AI possessing dangerous capabilities