Value specification

AGI Safety Literature Review

Everitt, Lea & Hutter (2018)

Category
Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking and goal misgeneralisation, or may result from AI systems using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"How do we get an AGI to work towards the right goals? MIRI calls this value specification. Bostrom (2014) discusses this problem at length, ar- guing that it is much harder than one might naively think. Davis (2015) criticizes Bostrom’s argument, and Bensinger (2015) defends Bostrom against Davis’ criticism. Reward corruption, reward gaming, and negative side effects are subproblems of value specification highlighted in the DeepMind and OpenAI agendas."(p. 8)

Other risks from Everitt, Lea & Hutter (2018) (8)