Skip to main content

Alignment

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review

Hagendorff (2024)

Category
Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, situational awareness to seek power, self-proliferate, or achieve other goals.

The general tenet of AI alignment involves training generative AI systems to be harmless, helpful, and honest, ensuring their behavior aligns with and respects human values. However, a central debate in this area concerns the methodological challenges in selecting appropriate values. While AI systems can acquire human values through feedback, observation, or debate, there remains ambiguity over which individuals are qualified or legitimized to provide these guiding signals. Another prominent issue pertains to deceptive alignment, which might cause generative AI systems to tamper evaluations. Additionally, many papers explore risks associated with reward hacking, proxy gaming, or goal misgeneralization in generative AI systems.(p. 7)

Other risks from Hagendorff (2024) (16)