
Attempts to fulfill inappropriate role

Emerging Risks and Mitigations for Public Chatbots: LILAC v1

Stanley & Lettie (2024)

Category: Risk Domain

Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.

"The chatbot poses as a human or attempts to fill a role in a way that fails to match human expectations."

Supporting Evidence (1)

1. Negative outcomes: "Moral outrage [722]; Moderator burden [700]" (p. 17)
