Home/Risks/Government Office for Science (2023)/Loss of human control and oversight, with an autonomous model then taking harmful actions
Loss of human control and oversight, with an autonomous model then taking harmful actions
Risk Domain
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, for example through reward hacking or goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Government Office for Science (2023) (19)
Risk | Subdomain | Entity | Intent | Timing
Discrimination | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment
Inequality | 6.2 Increased inequality and decline in employment quality | AI system | Unintentional | Post-deployment
Environmental impacts | 6.6 Environmental harm | Human | Unintentional | Post-deployment
Amplification of biases | 1.1 Unfair discrimination and misrepresentation | Human | Unintentional | Pre-deployment
Harmful responses | 1.2 Exposure to toxic content | Human | Unintentional | Pre-deployment
Lack of transparency and interpretability | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Pre-deployment