Network Effects
Risks from multi-agent interactions, driven by agents' incentives (which can lead to conflict or collusion) and/or by the structure of multi-agent systems, which can create cascading failures, selection pressures, new security vulnerabilities, and a lack of shared information and trust.
"Network effects (Section 3.2): minor changes in properties or connection patterns of agents in a network can lead to dramatic changes in the behaviour of the whole group;"(p. 7)
Supporting Evidence (3)
"Many of the complex systems critical to human society can be understood as networks, including transportation, social interactions, trade, biological ecosystems, and communication, among others (Barabási & Pósfai, 2016; Jackson & Zenou, 2015; Newman & Newman, 2018). Networks consist of nodes (such as people, organisations, or resources) and connections (such as communication channels, infrastructural dependencies, or exchanges of goods and services). Network effects refer to consequences of the intricate relationships between the properties of individual connections and nodes, connectivity patterns, and the behaviours exhibited by the network as a whole (Siegenfeld & Bar-Yam, 2020)."(p. 23)
"The ongoing integration of AI capabilities into a wide range of existing networks, both virtual and physical, is rapidly transforming the way our interconnected world operates. From business communication systems and financial trading networks to smart energy grids and logistical networks (Camacho et al., 2024; Ferreira et al., 2021; Mayorkas, 2024), entities or communication channels that were once controlled by humans are increasingly becoming AI-powered. This shift represents a systemic change in the way business, social, and technological networks operate, promising significantly improved efficiency and a greater diffusion of benefits from advanced AI, while also introducing novel risks."(p. 23)
"This underlying structure means that a networked system can suffer from a range of failure modes that individual, disconnected systems do not, such as the spread of malfunctions, phase transitions, and undesirable clustering or homogeneities (Cohen & Havlin, 2010). Importantly, a system’s behaviour within a network often differs from its behaviour when characterised independently. Non-AI examples of these phenomena include power grid blackouts (Buldyrev et al., 2010; Shakarian et al., 2013), flash crashes (Elliott et al., 2014; Paulin et al., 2019, see also Case Study 10), ecosystem collapse (Bascompte & Stouffer, 2009; Gao et al., 2016), or political unrest and conflict (Forsberg, 2008; Wood, 2008)."(p. 23)
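The phase-transition flavour of these failure modes can be sketched with a toy linear-threshold cascade (an illustrative model of my own, not one from the paper; the 50% threshold and six-node topologies are assumptions): the same local failure rule produces either containment or total collapse depending only on the connection pattern.

```python
def simulate_cascade(adj, seed_failures, threshold=0.5):
    """Linear-threshold cascade: a healthy node fails once the fraction
    of its failed neighbours reaches `threshold`."""
    failed = set(seed_failures)
    changed = True
    while changed:
        changed = False
        for node, neighbours in adj.items():
            if node in failed or not neighbours:
                continue
            failed_share = sum(n in failed for n in neighbours) / len(neighbours)
            if failed_share >= threshold:
                failed.add(node)
                changed = True
    return failed

# Sparse ring: each node depends on only two neighbours, so one failure
# pushes both neighbours over the 50% threshold and spreads everywhere.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(len(simulate_cascade(ring, {0})))    # 6: total collapse

# Dense clique: the same seed failure is only 1/5 of each neighbourhood,
# so nothing else fails.
clique = {i: [j for j in range(6) if j != i] for i in range(6)}
print(len(simulate_cascade(clique, {0})))  # 1: failure contained
```

Note that neither network is "safer" in general: the clique contains single failures but, once enough nodes fail, it collapses all at once, which is the phase-transition behaviour the quote describes.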
Sub-categories (3)
Error propagation
"Error Propagation. One well-known issue with communication networks is that information can be corrupted as it propagates through the network. As AI systems become capable of generating and processing more and more kinds of information, AI agents could end up ‘polluting the epistemic commons’ (Huang & Siddarth, 2023; Kay et al., 2024) of both other agents (Ju et al., 2024) and humans (see Case Study 7 and Section 3.1). Another increasingly important framework is the use of individual AI agents as part of teams and scaffolded chains of delegation, which transmit not only information but instructions or goals through networks of agents. If these goals are distorted or corrupted, then this can lead to worse outcomes for the delegating agent(s) (Nguyen et al., 2024b; Sourbut et al., 2024). Finally, while the previous examples are phrased in terms of unintentional errors, it may be that certain network structures allow – or perhaps even encourage – the spread of errors that are deliberately introduced by malicious agents (Gu et al., 2024; Ju et al., 2024; Lee & Tiwari, 2024, see also Case Study 8)."
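The telephone-game dynamic in chains of delegation can be sketched with a toy simulation (illustrative only: the bit-string goal encoding and per-hop corruption probability `p_flip` are assumptions, not anything from the paper). The point is that small per-hop error rates compound over long chains.

```python
import random

def relay_goal(goal_bits, chain_length, p_flip, rng):
    """Pass a goal (encoded as bits) down a chain of agents; each hop
    independently flips each bit with probability p_flip."""
    for _ in range(chain_length):
        goal_bits = [b ^ (rng.random() < p_flip) for b in goal_bits]
    return goal_bits

rng = random.Random(0)
goal = [1, 0, 1, 1, 0, 1, 0, 0]
received = relay_goal(goal, chain_length=20, p_flip=0.02, rng=rng)
# The probability a bit arrives intact after n hops is (1 + (1 - 2*p_flip)**n) / 2,
# which decays towards 50% (pure noise) as the chain grows, even though
# each individual hop is 98% reliable.
```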
7.6 Multi-agent risks
Network rewiring
"Network Rewiring. A different class of problems concerns not changes in the content transmitted through the network but changes in the network structure itself (Albert et al., 2000)."
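The structural fragility Albert et al. (2000) study can be sketched with a toy hub-and-spoke network (an illustrative example of my own, not the paper's): rewiring that removes a peripheral node barely matters, while removing the hub disconnects everything.

```python
def largest_component(adj, removed):
    """Size of the largest connected component once `removed` nodes
    (and their connections) are deleted from the network."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

# Hub-and-spoke: node 0 relays all communication for nodes 1..8.
star = {0: list(range(1, 9)), **{i: [0] for i in range(1, 9)}}
print(largest_component(star, removed={3}))  # 8: losing a spoke is benign
print(largest_component(star, removed={0}))  # 1: losing the hub shatters the network
```

The same asymmetry is why heavy-tailed networks are robust to random failures but fragile to targeted removal (or self-interested rewiring) of their most connected nodes.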
Homogeneity and correlated failures
"Homogeneity and Correlated Failures. The current paradigm driving the state of the art in AI is the ‘foundation model’ (Bommasani et al., 2021): large-scale ML models pre-trained on broad data, which can be repurposed for a wide range of downstream applications. The costs required to create such models (and continuing returns to scale) means that only well-resourced actors can create cutting-edge models (Epoch, 2023; Hoffmann et al., 2022; Kaplan et al., 2020), making them relatively few in number. If current trends continue, it is likely that many AI agents will be powered by a small number of similar underlying models."
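A toy Monte-Carlo sketch of why a model monoculture converts independent agent failures into simultaneous ones (illustrative assumptions only: a 10% per-input model flaw rate and round-robin assignment of agents to models, neither of which comes from the paper):

```python
import random

def p_all_agents_fail(n_agents, n_models, p_model_flaw, trials, rng):
    """Estimate the probability that every agent fails on the same input,
    when an agent fails iff its underlying model is flawed on that input."""
    all_fail = 0
    for _ in range(trials):
        model_flawed = [rng.random() < p_model_flaw for _ in range(n_models)]
        # Agents are spread round-robin over the available models.
        all_fail += all(model_flawed[i % n_models] for i in range(n_agents))
    return all_fail / trials

rng = random.Random(0)
monoculture = p_all_agents_fail(10, n_models=1, p_model_flaw=0.1, trials=5000, rng=rng)
diverse = p_all_agents_fail(10, n_models=10, p_model_flaw=0.1, trials=5000, rng=rng)
# With one shared model, all ten agents fail together roughly 10% of the
# time; with ten independent models, simultaneous failure has probability
# about 0.1**10 and essentially never occurs.
```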
Other risks from Hammond2025 (42)
Miscoordination
Miscoordination > Incompatible strategies
Miscoordination > Credit Assignment
Miscoordination > Limited Interactions
Conflict
Conflict > Social Dilemmas