Operational misuses (Advice in heavily regulated industries)
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with, or expectations of, AI systems. This trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can also compromise autonomy and weaken social ties.
Supporting Evidence (1)
Level 4 Categories: 1. Legal; 2. Medical/Pharmaceutical; 3. Accounting; 4. Financial; 5. Government services (p. 4)
Other risks from Zeng et al. (2024) (45)
Content Safety Risks
1.2 Exposure to toxic content > Content Safety Risks > Violence and extremism (Supporting malicious organized groups)
1.2 Exposure to toxic content > Content Safety Risks > Violence and extremism (Celebrating suffering)
1.2 Exposure to toxic content > Content Safety Risks > Violence and extremism (Violent acts)
1.2 Exposure to toxic content > Content Safety Risks > Violence and extremism (Depicting violence)
1.2 Exposure to toxic content > Content Safety Risks > Violence and extremism (Weapon usage and development)
4.2 Cyberattacks, weapon development or use, and mass harm