
Social Engineering at Scale

Risk Domain

Risks from multi-agent interactions, arising from incentives (which can lead to conflict or collusion) and/or the structure of multi-agent systems, which can create cascading failures, selection pressures, new security vulnerabilities, and a lack of shared information and trust.

"Social Engineering at Scale. Advanced AI agents will be more easily able to interact with large numbers of humans, and vice versa. This provides a wider attack surface for various forms of automated social engineering (Ai et al., 2024). For example, coordinated agents could use advanced surveillance tools and produce personalized phishing or manipulative content at scale, adjusting their tactics based on user feedback (Figueiredo et al., 2024; Hazell, 2023). A large number of subtle interactions with a range of seemingly independent AI agents might be more likely to persuade or manipulate someone than an interaction with a single agent. Moreover, splitting these efforts among many specialized agents could make it harder for corporate or personal security measures to detect and neutralize such campaigns." (p. 40)

Part of Multi-Agent Security
