AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk that they release unsafe and error-prone systems.
"The technological maturity level describes how mature and error-free a certain technology is in a certain application context. If new technologies with a lower level of maturity are used in the development of the AI system, they may contain risks that are still unknown or difficult to assess. Mature technologies, on the other hand, usually have a greater variety of empirical data available, which means that risks can be identified and assessed more easily. However, with mature technologies, there is a risk that risk awareness decreases over time" (p. 24)
Other risks from Steimers & Schneider (2022) (7)
Fairness
1.1 Unfair discrimination and misrepresentation
Privacy
2.0 Privacy & Security
Degree of Automation and Control
7.1 AI pursuing its own goals in conflict with human goals or values
Complexity of the Intended Task and Usage Environment
7.3 Lack of capability or robustness
Degree of Transparency and Explainability
7.4 Lack of transparency or interpretability
Security
2.2 AI system security vulnerabilities and attacks