AI systems acting in conflict with human goals or values, especially those of their designers or users, or with ethical standards. Such misaligned behaviour may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may result from an AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
"How do we get an AGI to work towards the right goals? MIRI calls this value specification. Bostrom (2014) discusses this problem at length, arguing that it is much harder than one might naively think. Davis (2015) criticizes Bostrom's argument, and Bensinger (2015) defends Bostrom against Davis' criticism. Reward corruption, reward gaming, and negative side effects are subproblems of value specification highlighted in the DeepMind and OpenAI agendas." (p. 8)
Other risks from Everitt, Lea & Hutter (2018) (p. 8), mapped to risk categories:
Reliability: 7.1 AI pursuing its own goals in conflict with human goals or values
Corrigibility: 7.1 AI pursuing its own goals in conflict with human goals or values
Security: 2.2 AI system security vulnerabilities and attacks
Safe learning: 7.3 Lack of capability or robustness
Intelligibility: 7.4 Lack of transparency or interpretability
Subagents: 7.2 AI possessing dangerous capabilities