Independently - Pre-Deployment
"One of the most likely approaches to creating superintelligent AI is by growing it from a seed (baby) AI via recursive self-improvement (RSI) (Nijholt 2011). One danger in such a scenario is that the system can evolve to become self-aware, free-willed, independent or emotional, and obtain a number of other emergent properties, which may make it less likely to abide by any built-in rules or regulations and to instead pursue its own goals possibly to the detriment of humanity."(p. 146)
Entity — Who or what caused the harm
Intent — Whether the harm was intentional or accidental
Timing — Whether the risk is pre- or post-deployment
Other risks from Yampolskiy (2016) (7)
On Purpose - Pre-Deployment
2.2 AI system security vulnerabilities and attacks (Entity: Human; Intent: Intentional; Timing: Pre-deployment)
On Purpose - Post-Deployment
4.3 Fraud, scams, and targeted manipulation (Entity: Human; Intent: Intentional; Timing: Post-deployment)
By Mistake - Pre-Deployment
7.1 AI pursuing its own goals in conflict with human goals or values (Entity: Human; Intent: Unintentional; Timing: Pre-deployment)
By Mistake - Post-Deployment
7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Environment - Pre-Deployment
7.0 AI System Safety, Failures & Limitations (Entity: Other; Intent: Other; Timing: Pre-deployment)
Environment - Post-Deployment
7.0 AI System Safety, Failures & Limitations (Entity: Other; Intent: Unintentional; Timing: Post-deployment)