Overreliance on AI systems, which cannot be subsequently unpicked
Risk Domain: Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Government Office for Science (2023) (19)
Risk | Subdomain | Entity | Intent | Timing
Discrimination | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment
Inequality | 6.2 Increased inequality and decline in employment quality | AI system | Unintentional | Post-deployment
Environmental impacts | 6.6 Environmental harm | Human | Unintentional | Post-deployment
Amplification of biases | 1.1 Unfair discrimination and misrepresentation | Human | Unintentional | Pre-deployment
Harmful responses | 1.2 Exposure to toxic content | Human | Unintentional | Pre-deployment
Lack of transparency and interpretability | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Pre-deployment