Inappropriate degree of automation
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm through malicious human actors, misaligned AI systems, or failures in the AI system itself.
"The AI application’s degree of automation ranges from no automation to fully autonomous. AI applications with a high degree of automation may exhibit unexpected behaviour and pose risks in terms of their reliability and safety." (p. 8)
Other risks from Schnitzer2024 (24)
Inadequate specification of ODD (7.3 Lack of capability or robustness)
Inadequate planning of performance requirements (7.3 Lack of capability or robustness)
Insufficient AI development documentation (7.4 Lack of transparency or interpretability)
Inappropriate degree of transparency to end users (7.4 Lack of transparency or interpretability)
Choice of untrustworthy data source (7.0 AI System Safety, Failures & Limitations)
Lack of data understanding (7.0 AI System Safety, Failures & Limitations)