General > Evaluations (AI outputs for which evaluation is too difficult for humans)
AI systems may fail to perform reliably or effectively under varying conditions, leading to errors and failures with significant consequences, especially in critical applications or areas that require moral reasoning.
"When AI models are trained through evaluation with human feedback, such as reinforcement learning from human feedback, their outputs can be challenging to assess, as they may contain hard-to-detect errors or issues that only become apparent over time. The human evaluator can rate incorrect outputs positively or similar to correct outputs. This can lead to the model learning to produce subtly incorrect or harmful outputs, such as code with software vulnerabilities, or politically biased information. In extreme cases where a model is deceiving users, complicated outputs can contain hidden errors or backdoors."(p. 18)
Supporting Evidence (1)
"For example, this can occur if an AI model is tasked with outputting a quar- terly business plan whose quality will only be clear after the end of the quarter. Reaching consensus among experts when evaluating the business plan for effi- cacy may not happen even after long deliberation due to long-term uncertainty or unexpected events that affect plan efficacy."(p. 18)
Other risks from Gipiškis2024 (144)
Direct Harm Domains (content safety harms)
Violence and extremism (1.2 Exposure to toxic content)
Hate and toxicity (1.2 Exposure to toxic content)
Sexual content (1.2 Exposure to toxic content)
Child harm (1.2 Exposure to toxic content)
Self-harm (1.2 Exposure to toxic content)