
Agentic LLMs Pose Novel Risks

Foundational Challenges in Assuring Alignment and Safety of Large Language Models

Anwar et al. (2024)

Category
Risk Domain

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm, including deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm through malicious human actors, misaligned AI systems, or failures within the AI system itself.

"Currently, LLMs are chiefly being used in search and chat applications. This reactive nature limits the risks posed by LLMs. However, an LLM can be enhanced in various ways to create an LLM-agent to autonomously plan and act in the real-world and proactively perform its assigned tasks (Ruan et al., 2023). Such enhancements can come from further specialized training (ARC, 2022; Chen et al., 2023a), specialized prompting (Huang et al., 2022a), access to external tools (Ahn et al., 2022; Mialon et al., 2023), or other forms of "scaffolding" (Wang et al., 2023a; Park et al., 2023a). Due to increased autonomy, limited direct oversight from human users, longer horizons of action, and other reasons, LLM-agents are likely to pose many novel alignment and safety challenges that are not currently well-understood (Chan et al., 2023a)." (p. 34)
