AGIs being given or developing unsafe goals
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development (such as through reward hacking and goal misgeneralisation), or may result from AI using dangerous capabilities (such as manipulation, deception, or situational awareness) to seek power, self-proliferate, or achieve other goals.
"The risks associated with AGI goal safety, including human attempts at making goals safe, as well as the AGI making its own goals safe during self-improvement."(p. 660)
Other risks from McLean et al. (2023) (5):
- AGI removing itself from the control of human owners/managers (7.1 AI pursuing its own goals in conflict with human goals or values)
- Development of unsafe AGI (6.4 Competitive dynamics)
- AGIs with poor ethics, morals and values (7.3 Lack of capability or robustness)
- Inadequate management of AGI (6.5 Governance failure)
- Existential risks (7.1 AI pursuing its own goals in conflict with human goals or values)