
AGIs being given or developing unsafe goals

The risks associated with Artificial General Intelligence: A systematic review

McLean et al. (2023)

Category
Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviours may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may arise when an AI uses dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"The risks associated with AGI goal safety, including human attempts at making goals safe, as well as the AGI making its own goals safe during self-improvement." (p. 660)
