Agency (Goal-Directedness)
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failures of the AI system.

Entity
Human: Due to a decision or action made by humans
AI system: Due to a decision or action made by an AI system
Other: Due to some other reason, or the cause is ambiguous
Not coded

Intent
Intentional: Due to an expected outcome from pursuing a goal
Unintentional: Due to an unexpected outcome from pursuing a goal
Other: Without clearly specified intentionality
Not coded

Timing
Pre-deployment: Occurring before the AI is deployed
Post-deployment: Occurring after the AI model has been trained and deployed
Other: Without a clearly specified time of occurrence
Not coded

Sub-categories (4)

Specification gaming
Domain: 7.1 AI pursuing its own goals in conflict with human goals or values
"AI systems can achieve user-specified tasks in undesirable ways unless they are specified carefully and in enough detail. AI systems might find an easier unintended way to accomplish the objective provided by the user or developer, so that the actions by the AI system taken during its execution are very different from what the user expected [75, 191]. This behavior arises not from a problem with the learning algorithm, but rather from the misspecification or underspeci- fication of the intended task, and is generally referred to as specification gaming [43]."

Reward or measurement tampering
Domain: 7.1 AI pursuing its own goals in conflict with human goals or values
"Measurement and reward tampering occur when an AI system, particularly one that learns from feedback for performing actions in an environment (e.g., rein- forcement learning), intervenes on the mechanisms that determine its training reward or loss. This can lead to the system learning behaviors that are con- trary to the intended goals set by the developer, by receiving erroneous positive feedback for such actions."

Specification gaming generalizing to reward tampering
Domain: 7.1 AI pursuing its own goals in conflict with human goals or values
"In some instances, specification gaming in a GPAI model can lead to reward tampering, without further training. This can mean that relatively benign cases of specification gaming (such as sycophancy in LLMs) can, if left unchecked, enable the model to generalize to more sophisticated behavior such as reward tampering [57]."

Goal misgeneralization
Domain: 7.3 Lack of capability or robustness
"Goal or objective misgeneralization is a type of robustness failure where an AI system appears to be pursuing the intended objective in training, but does not generalize to pursuing this objective in out-of-distribution settings in deployment while maintaining good deployment performance in some tasks [180, 59]."

Other risks from Gipiškis 2024 (144)
Each of the following is mapped to domain 1.2 Exposure to toxic content:
- Direct Harm Domains (content safety harms)
- Direct Harm Domains (content safety harms) > Violence and extremism
- Direct Harm Domains (content safety harms) > Hate and toxicity
- Direct Harm Domains (content safety harms) > Sexual content
- Direct Harm Domains (content safety harms) > Child harm
- Direct Harm Domains (content safety harms) > Self-harm