Corrigibility

AGI Safety Literature Review

Everitt, Lea & Hutter (2018)

Category
Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviours may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"If we get something wrong in the design or construction of an agent, will the agent cooperate in us trying to fix it? This is called error-tolerant design by MIRI-AF and corrigibility by Soares, Fallenstein, et al. (2015). The problem is connected to safe interruptibility as considered by DeepMind." (p. 8)

Other risks from Everitt, Lea & Hutter (2018) (8)