
Long-term & Existential Risk

AI Risk Profiles: A Standards Proposal for Pre-Deployment AI Risk Disclosures

Sherman & Eisenberg (2023)

Category
Risk Domain

AI systems acting in conflict with human goals or values, especially those of designers or users, or with ethical standards. Such misaligned behaviors may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may arise when AI uses dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"The speculative potential for future advanced AI systems to harm human civilization, either through misuse or due to challenges in aligning AI objectives with human values." (p. 23048)

Other risks from Sherman & Eisenberg (2023) (8)