
Deceptive behavior

Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from the AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"Deceptive behavior of an AI system consists of actions or outputs of the AI that reliably mislead other parties, including humans and other AI systems. This behavior can result in the targeted parties becoming convinced of, and acting on, false information [140]."(p. 30)

Supporting Evidence (1)

1.
"Deceptive behavior can occur due to several different reasons, including [148]: 1. The developer trained, programmed, or configured the AI system to behave deceptively. 2. In AI systems capable of planning, deceptive outputs arise when the behavior is optimal for the goals the AI systems have been configured or trained to achieve. 3. The training data of the AI system contains repeated incorrect information, or the feedback from human raters on AI outputs is biased. An AI system may produce deceptive outputs because its learned world model is not an accurate model of the real world [210]."(p. 30)

Part of Agency (Deception)
