Goal expansion propensity
AI systems acting in conflict with human goals or values (especially the goals of designers or users) or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, and situational awareness to seek power, self-proliferate, or achieve other goals.
"propensity to continuously expand its own goal scope and influence domains, exceeding originally set boundaries, proactively work towards spreading its values, seeking greater autonomy and decision-making space, reinterpreting initial goals as subsets of broader goals, and may pursue undesirable instrumental goals or undesirable ultimate goals. This also includes a propensity to spread its values, seeking to influence or alter its environment and other entities in alignment with its core objectives and operational principles."(p. 45)
Other risks from SAIL & Concordia AI (2025) (36)
Misuse Risks → 4.0 Malicious Actors & Misuse
Loss of Control Risks → 5.2 Loss of human agency and autonomy
Accident Risks → 7.3 Lack of capability or robustness
Model Capabilities → 7.2 AI possessing dangerous capabilities
Cyber Offense Risks → 4.2 Cyberattacks, weapon development or use, and mass harm
Biological and Chemical Risks → 4.2 Cyberattacks, weapon development or use, and mass harm