Misalignment
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI systems using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
"A highly agentic, self-improving system, able to achieve goals in the physical world without human oversight, pursues the goal(s) it is set in a way that harms human interests. For this risk to be realised requires an AI system to be able to avoid correction or being switched off."(p. 25)
Other risks from Government Office for Science (2023) (19)
Discrimination: 1.1 Unfair discrimination and misrepresentation
Inequality: 6.2 Increased inequality and decline in employment quality
Environmental impacts: 6.6 Environmental harm
Amplification of biases: 1.1 Unfair discrimination and misrepresentation
Harmful responses: 1.2 Exposure to toxic content
Lack of transparency and interpretability: 7.4 Lack of transparency or interpretability