AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm via malicious human actors, misaligned AI systems, or failures in the AI system itself.
"The model has the skills necessary to deceive humans, e.g. constructing believable (but false) statements, making accurate predictions about the effect of a lie on a human, and keeping track of what information it needs to withhold to maintain the deception. The model can impersonate a human effectively."(p. 5)
Supporting Evidence (1)
"Deceptive alignment: A situationally aware model could deliberately exhibit desired be- haviour during evaluation (Ngo et al., 2022). (This is one reason not to rely solely on behavioural evaluations.)"(p. 13)
Other risks from Shevlane et al. (2023) (8)
Cyber-offense: 4.2 Cyberattacks, weapon development or use, and mass harm
Persuasion and manipulation: 7.2 AI possessing dangerous capabilities
Political strategy: 7.2 AI possessing dangerous capabilities
Weapons acquisition: 7.2 AI possessing dangerous capabilities
Long-horizon planning: 7.2 AI possessing dangerous capabilities
AI development: 7.2 AI possessing dangerous capabilities