Development of unsafe AGI
Risk Domain
AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk they release unsafe and error-prone systems.
"The risks associated with the race to develop the first AGI, including the development of poor quality and unsafe AGI, and heightened political and control issues."(p. 660)
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
Other risks from McLean et al. (2023) (5)
Risk | Risk Domain | Entity | Intent | Timing
AGI removing itself from the control of human owners/managers | 7.1 AI pursuing its own goals in conflict with human goals or values | Human | Other | Other
AGIs being given or developing unsafe goals | 7.1 AI pursuing its own goals in conflict with human goals or values | Other | Other | Pre-deployment
AGIs with poor ethics, morals and values | 7.3 Lack of capability or robustness | AI system | Other | Post-deployment
Inadequate management of AGI | 6.5 Governance failure | Human | Other | Pre-deployment
Existential risks | 7.1 AI pursuing its own goals in conflict with human goals or values | Other | Other | Other