
Human-Computer Interaction Harms

Ethical and social risks of harm from language models

Weidinger et al. (2021)

Category: Risk Domain

This risk covers users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or to manipulate users), or can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise user autonomy and weaken social ties.

"Harms that arise from users overly trusting the language model, or treating it as human-like"(p. 29)

Sub-categories (3)

Anthropomorphising systems can lead to overreliance or unsafe use

"...humans interacting with conversational agents may come to think of these agents as human-like. Anthropomorphising LMs may inflate users’ estimates of the conversational agent’s competencies...As a result, they may place undue confidence, trust, or expectations in these agents...This can result in different risks of harm, for example when human users rely on conversational agents in domains where this may cause knock-on harms, such as requesting psychotherapy...Anthropomorphisation may amplify risks of users yielding effective control by coming to trust conversational agents “blindly”. Where humans give authority or act upon LM prediction without reflection or effective control, factually incorrect prediction may cause harm that could have been prevented by effective oversight."

5.1 Overreliance and unsafe use
Human · Unintentional · Post-deployment

Creating avenues for exploiting user trust, nudging or manipulation

"In conversation, users may reveal private information that would otherwise be difficult to access, such as thoughts, opinions, or emotions. Capturing such information may enable downstream applications that violate privacy rights or cause harm to users, such as via surveillance or the creation of addictive applications."

5.1 Overreliance and unsafe use
Other · Unintentional · Post-deployment

Promoting harmful stereotypes by implying gender or ethnic identity

"A conversational agent may invoke associations that perpetuate harmful stereotypes, either by using particular identity markers in language (e.g. referring to “self” as “female”), or by more general design features (e.g. by giving the product a gendered name)."

1.1 Unfair discrimination and misrepresentation
AI system · Unintentional · Post-deployment
