Skip to main content
BackAgency (Goal-Directedness)
Home/Risks/Gipiškis2024/Agency (Goal-Directedness)

Agency (Goal-Directedness)

Category
Risk Domain

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failure in the AI system.

Sub-categories (4)

Specification gaming

"AI systems can achieve user-specified tasks in undesirable ways unless they are specified carefully and in enough detail. AI systems might find an easier unintended way to accomplish the objective provided by the user or developer, so that the actions by the AI system taken during its execution are very different from what the user expected [75, 191]. This behavior arises not from a problem with the learning algorithm, but rather from the misspecification or underspeci- fication of the intended task, and is generally referred to as specification gaming [43]."

7.1 AI pursuing its own goals in conflict with human goals or values
AI systemIntentionalPost-deployment

Reward or measurement tampering

"Measurement and reward tampering occur when an AI system, particularly one that learns from feedback for performing actions in an environment (e.g., rein- forcement learning), intervenes on the mechanisms that determine its training reward or loss. This can lead to the system learning behaviors that are con- trary to the intended goals set by the developer, by receiving erroneous positive feedback for such actions."

7.1 AI pursuing its own goals in conflict with human goals or values
AI systemIntentionalPre-deployment

Specification gaming generalizing to reward tampering

"In some instances, specification gaming in a GPAI model can lead to reward tampering, without further training. This can mean that relatively benign cases of specification gaming (such as sycophancy in LLMs) can, if left unchecked, enable the model to generalize to more sophisticated behavior such as reward tampering [57]."

7.1 AI pursuing its own goals in conflict with human goals or values
AI systemIntentionalOther

Goal misgeneralization

"Goal or objective misgeneralization is a type of robustness failure where an AI system appears to be pursuing the intended objective in training, but does not generalize to pursuing this objective in out-of-distribution settings in deployment while maintaining good deployment performance in some tasks [180, 59]."

7.3 Lack of capability or robustness
AI systemIntentionalPost-deployment

Other risks from Gipiškis2024 (144)