Self-harm
Risk Domain
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
"A person who deliberately damages their own body as a direct or indirect result of using a technology system"
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Other risks from Li et al. (2025)
Autonomy
5.2 Loss of human agency and autonomy (Entity: Other; Intent: Other; Timing: Other)

Autonomy > Impersonation / identity theft
4.3 Fraud, scams, and targeted manipulation (Entity: Human; Intent: Intentional; Timing: Post-deployment)

Misinformation Harms
3.1 False or misleading information (Entity: AI system; Intent: Other; Timing: Post-deployment)

Representation and Toxicity
1.0 Discrimination & Toxicity (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

IP / copyright / personality / rights loss
4.3 Fraud, scams, and targeted manipulation (Entity: Human; Intent: Intentional; Timing: Post-deployment)

Autonomy / agency loss
5.2 Loss of human agency and autonomy (Entity: Other; Intent: Other; Timing: Other)