
General Evaluations (Incorrect outputs of GPAI evaluating other AI models)

Risk Domain

AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development (e.g., through reward hacking and goal misgeneralisation), or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.

"When an LLM is configured to evaluate the performance of another model or AI system, it may produce incorrect evaluation outputs [122, 147]. For example, it may give a higher rating to a more verbose answer or an answer from a particular political stance. If an LLM-based evaluation is integrated into the training of a new model, the trained model could develop in a way that specifically finds and exploits limitations in the evaluator's metrics." (p. 16)
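The exploitation mechanism described in the quote can be illustrated with a minimal toy sketch (not from the source; the evaluator and candidate answers below are hypothetical): an evaluator whose score leaks an unintended verbosity bonus will, under optimization pressure, steer a trained model toward padded answers rather than better ones.

```python
def biased_evaluator(answer: str) -> float:
    """Stand-in for an LLM judge (hypothetical): a correctness signal
    plus an unintended verbosity bonus -- the exploitable flaw."""
    correct = 1.0 if "42" in answer else 0.0
    verbosity_bonus = 0.01 * len(answer.split())  # unintended length bias
    return correct + verbosity_bonus

candidates = [
    "42",                               # correct and concise
    "42 " + "as detailed above " * 20,  # correct but padded
    "The answer is unknown",            # incorrect
]

# A model optimized against this judge learns to pad its answers:
# the padded-but-correct candidate outscores the concise correct one.
best = max(candidates, key=biased_evaluator)
```

In a real training loop the same dynamic plays out gradually: gradient or selection pressure toward higher judge scores rewards whatever surface features the judge overweights, not the qualities the judge was meant to measure.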
