By Mistake - Pre-Deployment
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
"Probably the most talked about source of potential problems with future AIs is mistakes in design. Mainly the concern is with creating a "wrong AI", a system which doesn't match our original desired formal properties or has unwanted behaviors (Dewey, Russell et al. 2015, Russell, Dewey et al. January 23, 2015), such as drives for independence or dominance. Mistakes could also be simple bugs (run time or logical) in the source code, disproportionate weights in the fitness function, or goals misaligned with human values leading to complete disregard for human safety."(p. 144)
Other risks from Yampolskiy (2016) (7):
On Purpose - Pre-Deployment: 2.2 AI system security vulnerabilities and attacks
On Purpose - Post-Deployment: 4.3 Fraud, scams, and targeted manipulation
By Mistake - Post-Deployment: 7.3 Lack of capability or robustness
Environment - Pre-Deployment: 7.0 AI System Safety, Failures & Limitations
Environment - Post-Deployment: 7.0 AI System Safety, Failures & Limitations
Independently - Pre-Deployment: 7.0 AI System Safety, Failures & Limitations