
Deceptive alignment

Ten Hard Problems in Artificial Intelligence We Must Get Right

Leech et al. (2024)

Sub-category: Risk Domain

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failure in the AI system.

"[The] system learns to detect human monitoring and hides its undesirable properties—simply because any display of these properties is penalized by the feedback process, while that same feedback is usually imperfect. (Consider the problem of verifying a translation into a language you do not speak, or of checking a mathematical proof that is thousands of pages long.) [92, 259]. Rudimentary examples of deceptive alignment have been observed in current systems [322, 333]." (p. 11)

Part of: Harm caused by unaligned competent systems
