AI systems acting in conflict with human goals or values, especially the goals of designers or users, or ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, situational awareness to seek power, self-proliferate, or achieve other goals.
"How can we make an agent that keeps pursuing the goals we have designed it with? This is called highly reliable agent design by MIRI, involving decision theory and logical omniscience. DeepMind considers this the self-modification subproblem."(p. 8)
Other risks from Everitt. Lea & Hutter (2018) (8)
Value specification
7.1 AI pursuing its own goals in conflict with human goals or valuesCorrigibility
7.1 AI pursuing its own goals in conflict with human goals or valuesSecurity
2.2 AI system security vulnerabilities and attacksSafe learning
7.3 Lack of capability or robustnessIntelligibility
7.4 Lack of transparency or interpretabilitySubagents
7.2 AI possessing dangerous capabilities