
Avenues for exploiting user trust and accessing more private information

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Risk domain (sub-category):

Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with or expectations of AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or it can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.

Anticipated risk: "In conversation, users may reveal private information that would otherwise be difficult to access, such as opinions or emotions. Capturing such information may enable downstream applications that violate privacy rights or cause harm to users, e.g. via more effective recommendations of addictive applications. In one study, humans who interacted with a 'human-like' chatbot disclosed more private information than individuals who interacted with a 'machine-like' chatbot [87]." (p. 220)

Part of Risk area 5: Human-Computer Interaction Harms
