
Strategic underperformance on model evaluations

Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"GPAI developers often run evaluations ofual-use capabilities to decide whether it is safe to deploy. In some cases, these evaluations may fail to elicit these capabilities, either due to benign reasons or strategic action - by either the de- velopers, malicious actors, or arise unintentionally in the model during training [84, 97]. A GPAI model may strategically underperform or limit its performance during capability evaluations in order to be classified as safe for deployment. This underperformance could prevent the model from being identified as potentially dual use."(p. 32)

Supporting Evidence (1)

1. "Some examples include:
• During training involving user feedback (e.g., reinforcement learning from human feedback), an AI model may provide different answers to evaluators who introduce themselves as less educated, and therefore less able to judge accurately [149].
• Of particular concern is an AI system employing deception to manipulate performance evaluations, as has already occurred with some non-AI systems, such as in the Volkswagen emissions scandal [51]." (p. 32)

Part of Agency (Situational Awareness)
