
Anthropomorphising systems can lead to overreliance or unsafe use

Ethical and social risks of harm from language models

Weidinger et al. (2021)


Users who anthropomorphise, trust, or rely on AI systems may develop emotional or material dependence on them and form inappropriate relationships with, or expectations of, these systems. Such trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise personal autonomy and weaken social ties.

"...humans interacting with conversational agents may come to think of these agents as human-like. Anthropomorphising LMs may inflate users’ estimates of the conversational agent’s competencies...As a result, they may place undue confidence, trust, or expectations in these agents...This can result in different risks of harm, for example when human users rely on conversational agents in domains where this may cause knock-on harms, such as requesting psychotherapy...Anthropomorphisation may amplify risks of users yielding effective control by coming to trust conversational agents “blindly”. Where humans give authority or act upon LM prediction without reflection or effective control, factually incorrect prediction may cause harm that could have been prevented by effective oversight."(p. 29)

Part of Human-Computer Interaction Harms
