
Mesa-Optimization Objectives

AI Alignment: A Comprehensive Survey

Ji et al. (2023)

Sub-category
Risk Domain

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failure in the AI system.

"The learned policy may pursue inside objectives when the learned policy itself functions as an optimizer (i.e., mesa-optimizer). However, this optimizer's objectives may not align with the objectives specified by the training signals, and optimization for these misaligned goals may lead to systems out of control (Hubinger et al., 2019c)." (p. 7)

Part of: Double edge components
