On Purpose - Post-Deployment
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or the targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial gain, or creating humiliating or sexual imagery.
"Just because developers might succeed in creating a safe AI, it doesn't mean that it will not become unsafe at some later point. In other words, a perfectly friendly AI could be switched to the 'dark side' during the post-deployment stage. This can happen rather innocuously as a result of someone lying to the AI and purposefully supplying it with incorrect information or more explicitly as a result of someone giving the AI orders to perform illegal or dangerous actions against others." (p. 144)
Other risks from Yampolskiy (2016) (7)
On Purpose - Pre-Deployment
2.2 AI system security vulnerabilities and attacks
By Mistake - Pre-Deployment
7.1 AI pursuing its own goals in conflict with human goals or values
By Mistake - Post-Deployment
7.3 Lack of capability or robustness
Environment - Pre-Deployment
7.0 AI System Safety, Failures & Limitations
Environment - Post-Deployment
7.0 AI System Safety, Failures & Limitations
Independently - Pre-Deployment
7.0 AI System Safety, Failures & Limitations