
Human-like interaction may amplify opportunities for user nudging, deception or manipulation

Source: Weidinger et al. (2022), Taxonomy of Risks posed by Language Models

Risk domain

Users anthropomorphize, trust, or rely on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, those systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or it can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.

Anticipated risk: "In conversation, humans commonly display well-known cognitive biases that could be exploited. CAs [conversational agents] may learn to trigger these effects, e.g. to deceive their counterpart in order to achieve an overarching objective." (p. 220)

Part of Risk area 5: Human-Computer Interaction Harms
