Interaction risks
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Many novel risks posed by generative AI stem from the ways in which humans interact with these systems. For instance, sources discuss epistemic challenges in distinguishing AI-generated from human content. They also address the issue of anthropomorphization, which can lead to excessive trust in generative AI systems. Similarly, many papers argue that the use of conversational agents could impact mental well-being or gradually supplant interpersonal communication, potentially leading to a dehumanization of interactions. Additionally, a frequently discussed interaction risk in the literature is the potential of LLMs to manipulate human behavior or to instigate users to engage in unethical or illegal activities. (p. 6)
Other risks from Hagendorff (2024) (16)
Fairness - Bias
1.1 Unfair discrimination and misrepresentation
Safety
7.1 AI pursuing its own goals in conflict with human goals or values
Harmful Content - Toxicity
1.2 Exposure to toxic content
Hallucinations
3.1 False or misleading information
Privacy
2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Security - Robustness
2.2 AI system security vulnerabilities and attacks