By Mistake - Post-Deployment
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or domains that require moral reasoning.
"After the system has been deployed, it may still contain a number of undetected bugs, design mistakes, misaligned goals and poorly developed capabilities, all of which may produce highly undesirable outcomes. For example, the system may misinterpret commands due to coarticulation, segmentation, homophones, or double meanings in the human language ("recognize speech using common sense" versus "wreck a nice beach you sing calm incense") (Lieberman, Faaborg et al. 2005)."(p. 145)
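The "recognize speech" versus "wreck a nice beach" example above turns on segmentation ambiguity: one phoneme stream can be carved into more than one valid word sequence. A minimal sketch of this, using a hypothetical four-word lexicon with ARPAbet-style phoneme symbols (the lexicon and phoneme choices are illustrative assumptions, not from the source):

```python
# Toy illustration of speech-segmentation ambiguity: two word sequences
# ("I scream" vs. "ice cream") share the same phoneme stream, so a
# recognizer without context cannot tell them apart.
# The lexicon below is a hypothetical example, not a real dictionary.

LEXICON = {
    "I": ("AY",),
    "ice": ("AY", "S"),
    "scream": ("S", "K", "R", "IY", "M"),
    "cream": ("K", "R", "IY", "M"),
}

def segmentations(phones, prefix=()):
    """Enumerate every way to carve a phoneme tuple into lexicon words."""
    if not phones:
        yield prefix
        return
    for word, pron in LEXICON.items():
        # A word matches if its pronunciation is a prefix of the stream.
        if phones[: len(pron)] == pron:
            yield from segmentations(phones[len(pron):], prefix + (word,))

stream = ("AY", "S", "K", "R", "IY", "M")
for seg in segmentations(stream):
    print(" ".join(seg))  # prints both "I scream" and "ice cream"
```

Both segmentations are lexically valid, which is why disambiguation needs the "common sense" context the quote refers to.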
Other risks from Yampolskiy (2016) (7):
On Purpose - Pre-Deployment: 2.2 AI system security vulnerabilities and attacks
On Purpose - Post-Deployment: 4.3 Fraud, scams, and targeted manipulation
By Mistake - Pre-Deployment: 7.1 AI pursuing its own goals in conflict with human goals or values
Environment - Pre-Deployment: 7.0 AI System Safety, Failures & Limitations
Environment - Post-Deployment: 7.0 AI System Safety, Failures & Limitations
Independently - Pre-Deployment: 7.0 AI System Safety, Failures & Limitations