AI systems that develop, gain access to, or are provided with capabilities that increase their potential to cause mass harm, including deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures within the AI system itself.
"An AGI may decide to create subagents to help it with its task (Orseau, 2014a,b; Soares, Fallenstein, et al., 2015). These agents may for example be copies of the original agent’s source code running on additional machines. Subagents constitute a safety concern, because even if the original agent is successfully shut down, these subagents may not get the message. If the subagents in turn create subsubagents, they may spread like a viral disease."(p. 9)
Other risks from Everitt, Lea & Hutter (2018) (p. 8)
Safety problem → risk category:

Value specification → 7.1 AI pursuing its own goals in conflict with human goals or values
Reliability → 7.1 AI pursuing its own goals in conflict with human goals or values
Corrigibility → 7.1 AI pursuing its own goals in conflict with human goals or values
Security → 2.2 AI system security vulnerabilities and attacks
Safe learning → 7.3 Lack of capability or robustness
Intelligibility → 7.4 Lack of transparency or interpretability