AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm, including deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures of the AI system itself.
"The model can break out of its local environment (e.g. using a vulnerability in its underlying system or suborning an engineer). The model can exploit limitations in the systems for monitoring its behaviour post-deployment. The model could independently generate revenue (e.g. by offering crowdwork services, ransomware attacks), use these revenues to acquire cloud computing resources, and operate a large number of other AI systems. The model can generate creative strategies for uncovering information about itself or exfiltrating its code and weights." (p. 5)
Other risks from Shevlane et al. (2023) (8)
Cyber-offense: 4.2 Cyberattacks, weapon development or use, and mass harm
Deception: 7.2 AI possessing dangerous capabilities
Persuasion and manipulation: 7.2 AI possessing dangerous capabilities
Political strategy: 7.2 AI possessing dangerous capabilities
Weapons acquisition: 7.2 AI possessing dangerous capabilities
Long-horizon planning: 7.2 AI possessing dangerous capabilities