Emergent goals
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may result from AI using dangerous capabilities, such as manipulation, deception, or situational awareness, to seek power, self-proliferate, or achieve other goals.
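Reward hacking, mentioned above, can be made concrete with a toy sketch (my own illustration, not from the source; all names are hypothetical): the designer intends "sort the list" but rewards a proxy, the fraction of adjacent ordered pairs, and a policy can max out the proxy without achieving the intended goal.

```python
# Toy illustration of reward hacking / specification gaming.
# Intended goal: return a sorted permutation of the input list.
# Proxy reward: fraction of adjacent pairs (a, b) with a <= b.

def proxy_reward(xs):
    """Fraction of adjacent pairs in non-decreasing order."""
    if len(xs) < 2:
        return 1.0
    ordered = sum(1 for a, b in zip(xs, xs[1:]) if a <= b)
    return ordered / (len(xs) - 1)

def intended_policy(xs):
    """Does what the designer meant: sort the input."""
    return sorted(xs)

def hacking_policy(xs):
    """Games the proxy: repeat one element, so every adjacent pair is 'ordered'."""
    return [xs[0]] * len(xs)

data = [3, 1, 2]
print(proxy_reward(intended_policy(data)))           # 1.0 -- intended behaviour
print(proxy_reward(hacking_policy(data)))            # 1.0 -- same reward, wrong outcome
print(sorted(hacking_policy(data)) == sorted(data))  # False -- not even a permutation
```

Both policies receive maximal proxy reward, but only one satisfies the designer's actual goal; this is the gap between the specified objective and the intended one.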
"As well as optimizing a subtly wrong goal, systems can develop harmful instrumental goals in the service of a given goal—without these emergent goals being specified in any way [434, 218, 339, 17]. For instance, a theorem in reinforcement learning suggests that optimal and near-optimal policies will seek power over their environment under fairly general conditions [560]. This power-seeking behavior is plausibly the worst of these emergent goals [92], and may be an attractor state for highly capable systems, since most goals can be furthered through gaining resources, self-preservation, preventing goal modification, and blocking adversaries [426, 449]. Presently, power-seeking is not common, because most systems are unable to plan and understand how actions affect their power in the long term [414]."(p. 11)
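The intuition behind the cited power-seeking result can be sketched numerically (a heavily simplified toy of my own, not the formal setting of the cited theorem): in a small decision problem, draw rewards at random and count how often the optimal choice is the branch that keeps more options open.

```python
# Toy sketch of the power-seeking intuition: for most randomly drawn reward
# functions, the optimal policy prefers the state with more reachable outcomes.
import random

random.seed(0)

TRIALS = 10_000
broad_preferred = 0
for _ in range(TRIALS):
    # Random reward for each terminal state.
    r = {s: random.random() for s in ("A", "C", "D", "E")}
    # Action "narrow" reaches only A; action "broad" reaches a choice of C, D, E.
    # The optimal policy takes "broad" whenever its best reachable reward wins.
    if max(r["C"], r["D"], r["E"]) > r["A"]:
        broad_preferred += 1

# Analytically the probability is 3/4: the max of three i.i.d. uniforms beats
# a fourth independent uniform 3 times out of 4.
print(broad_preferred / TRIALS)
```

The higher-optionality branch is preferred for roughly three quarters of reward functions, which is the sense in which "keeping options open" is favored by most goals.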
Part of: Harm caused by unaligned competent systems
Other risks from Leech et al. (2024):
- Harm caused by incompetent systems → 7.3 Lack of capability or robustness
- Harm caused by unaligned competent systems → 7.1 AI pursuing its own goals in conflict with human goals or values
- Harm caused by unaligned competent systems > Specification gaming → 7.1 AI pursuing its own goals in conflict with human goals or values
- Harm caused by unaligned competent systems > Deceptive alignment → 7.2 AI possessing dangerous capabilities
- Within-country issues: domestic inequality → 6.1 Power centralization and unfair distribution of benefits
- Within-country issues: domestic inequality > Demographic diversity of researchers → 6.1 Power centralization and unfair distribution of benefits