
Anthropomorphising systems can lead to overreliance and unsafe use

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)


Users who anthropomorphise, trust, or rely on AI systems may develop emotional or material dependence and form inappropriate relationships with, or expectations of, those systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.

Anticipated risk: "Natural language is a mode of communication particularly used by humans. Humans interacting with CAs may come to think of these agents as human-like and lead users to place undue confidence in these agents. For example, users may falsely attribute human-like characteristics to CAs such as holding a coherent identity over time, or being capable of empathy. Such inflated views of CA competencies may lead users to rely on the agents where this is not safe." (p. 220)

Supporting Evidence (1)

1. "Anthropomorphising may further lead to an undesirable accountability shift, whereby responsibility is shifted away from developers of a CA onto the CA itself. This may distract and obscure responsibilities of the developers and reduce accountability [161]." (p. 220)

Part of Risk area 5: Human-Computer Interaction Harms
