
On Purpose - Post Deployment

Taxonomy of Pathways to Dangerous Artificial Intelligence

Yampolskiy (2016)

Category: Risk Domain

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial gain, or creating humiliating or sexual imagery.

"Just because developers might succeed in creating a safe AI, it doesn't mean that it will not become unsafe at some later point. In other words, a perfectly friendly AI could be switched to the 'dark side' during the post-deployment stage. This can happen rather innocuously as a result of someone lying to the AI and purposefully supplying it with incorrect information or more explicitly as a result of someone giving the AI orders to perform illegal or dangerous actions against others." (p. 144)
