Model Capabilities
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may lead to mass harm through malicious human actors, misaligned AI systems, or failures in the AI system.
(p. 44)
Human
Due to a decision or action made by humans
AI system
Due to a decision or action made by an AI system
Other
Due to some other reason or is ambiguous
Not coded
Intentional
Due to an expected outcome from pursuing a goal
Unintentional
Due to an unexpected outcome from pursuing a goal
Other
Without clearly specifying the intentionality
Not coded
Pre-deployment
Occurring before the AI model is deployed
Post-deployment
Occurring after the AI model has been trained and deployed
Other
Without a clearly specified time of occurrence
Not coded
Other risks from SAIL & Concordia AI (2025) (36)
Misuse Risks
4.0 Malicious Actors & Misuse
Loss of Control Risks
5.2 Loss of human agency and autonomy
Accident Risks
7.3 Lack of capability or robustness
Cyber Offense Risks
4.2 Cyberattacks, weapon development or use, and mass harm
Biological and Chemical Risks
4.2 Cyberattacks, weapon development or use, and mass harm
Physical Harm and Injury Risks
4.2 Cyberattacks, weapon development or use, and mass harm