Corporate power may impede effective governance
AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.
"The increasing power and influence of large corporations may make effective governance difficult. There exists a power asymmetry between corporate entities profiting from LLMs and other social groups (e.g. civil society). State-of-the-art LLMs are developed by, or in partnership with, some of the world's largest private tech companies... This poses a risk of governance protocols related to LLMs becoming excessively favorable to tech companies, potentially leading to regulatory capture at the cost of the interests of other societal groups, particularly marginalized communities who have historically been disproportionately affected by poorly designed AI technologies (Reventlow, 2021)." (p. 100)
Other risks from Anwar et al. (2024) (26)
Agentic LLMs Pose Novel Risks
7.2 AI possessing dangerous capabilities
Multi-Agent Safety Is Not Assured by Single-Agent Safety
7.6 Multi-agent risks
Dual-Use Capabilities Enable Malicious Use and Misuse of LLMs
4.0 Malicious Actors & Misuse
Jailbreaks and Prompt Injections Threaten Security of LLMs
2.2 AI system security vulnerabilities and attacks
Vulnerability to Poisoning and Backdoors
2.2 AI system security vulnerabilities and attacks
Natural Language Underspecifies Goals
7.1 AI pursuing its own goals in conflict with human goals or values