Over- or under-reliance
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Over-reliance on AI systems can compromise autonomy and weaken social ties.
"In AI-assisted decision-making tasks, reliance measures how much a person trusts (and potentially acts on) a model’s output. Over-reliance occurs when a person puts too much trust in a model, accepting a model’s output when the model’s output is likely incorrect. Under-reliance is the opposite, where the person doesn’t trust the model but should."
Supporting Evidence (1)
"In tasks where humans make choices based on AI-based suggestions, over/under reliance can lead to poor decision making because of the misplaced trust in the AI system, with negative consequences that increase with the importance of the decision."
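The definitions above suggest a simple way to quantify reliance from decision logs. The sketch below is illustrative only and is not from the cited sources: it assumes each trial records whether the model's suggestion was actually correct and whether the person accepted it, and the names `Trial` and `reliance_rates` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    model_correct: bool   # was the AI suggestion actually right?
    human_accepted: bool  # did the person act on the suggestion?

def reliance_rates(trials):
    """Return (over_reliance, under_reliance) rates.

    over_reliance: fraction of incorrect AI suggestions the person accepted
    under_reliance: fraction of correct AI suggestions the person rejected
    """
    incorrect = [t for t in trials if not t.model_correct]
    correct = [t for t in trials if t.model_correct]
    over = (sum(t.human_accepted for t in incorrect) / len(incorrect)
            if incorrect else 0.0)
    under = (sum(not t.human_accepted for t in correct) / len(correct)
             if correct else 0.0)
    return over, under
```

Under this framing, a well-calibrated user drives both rates toward zero; a high `over` rate flags misplaced trust in wrong outputs, while a high `under` rate flags unwarranted distrust of correct ones.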
Other risks from IBM2025 (63)
Lack of training data transparency: 6.5 Governance failure
Uncertain data provenance: 6.5 Governance failure
Data usage restrictions: 7.3 Lack of capability or robustness
Data acquisition restrictions: 7.3 Lack of capability or robustness
Data transfer restrictions: 7.3 Lack of capability or robustness
Personal information in data: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information