Anthropomorphising systems can lead to overreliance or unsafe use
Users may anthropomorphize, trust, or rely on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, those systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm when AI is used inappropriately in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.
"...humans interacting with conversational agents may come to think of these agents as human-like. Anthropomorphising LMs may inflate users’ estimates of the conversational agent’s competencies... As a result, they may place undue confidence, trust, or expectations in these agents... This can result in different risks of harm, for example when human users rely on conversational agents in domains where this may cause knock-on harms, such as requesting psychotherapy... Anthropomorphisation may amplify risks of users yielding effective control by coming to trust conversational agents “blindly”. Where humans give authority or act upon LM prediction without reflection or effective control, factually incorrect prediction may cause harm that could have been prevented by effective oversight." (p. 29)
Part of Human-Computer Interaction Harms
Other risks from Weidinger et al. (2021) (26)
Discrimination, Exclusion and Toxicity → 1.0 Discrimination & Toxicity
Discrimination, Exclusion and Toxicity > Social stereotypes and unfair discrimination → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Exclusionary norms → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Toxic language → 1.2 Exposure to toxic content
Discrimination, Exclusion and Toxicity > Lower performance for some languages and social groups → 1.3 Unequal performance across groups
Information Hazards → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information