Distributional Shift
Risks from multi-agent interactions, due to incentives (which can lead to conflict or collusion) and/or the structure of multi-agent systems, which can create cascading failures, selection pressures, new security vulnerabilities, and a lack of shared information and trust.
"Distributional Shift. Individual ML systems can perform poorly in contexts different from those in which they were trained. A key source of these distributional shifts is the actions and adaptations of other agents (Narang et al., 2023; Papoudakis et al., 2019; Piliouras & Yu, 2022), which in single-agent approaches are often simply or ignored or at best modelled exogenously. Indeed, the sheer number and variance of behaviours that can be exhibited other agents means that multi-agent systems pose an especially challenging generalisation problem for individual learners (Agapiou et al., 2022; Leibo et al., 2021; Stone et al., 2010). While distributional shifts can cause issues in common-interest settings (see Section 2.1), they are more worrisome in mixed-motive settings since the ability of agents to cooperate depends not only on the ability to coordinate on one of many arbitrary conventions (which might be easily resolved by a common language), but on their beliefs about what solutions other agents will find acceptable"(p. 32)
Supporting Evidence (1)
"For example, training a negotiating agent on a distribution of counterparts with too little diversity in their negotiating tactics can lead to catastrophic overconfidence in high-stakes settings (cf. Stastny et al., 2021), which might already have little precedent in the training data. These issues may be aggravated by the fact that multi-agent systems can be highly dynamic (Papoudakis et al., 2019), as AI agents or their designers will be incentivised to continually adapt to the behaviour of other agents. These effects might also be exacerbated by the fact that models may come to be trained using data generated by other models (Alemohammad et al., 2023; Mart ́ınez et al., 2023; Shumailov et al., 2024, see also Section 3.3), though preliminary work suggests such concerns might be overblown (Gerstgrasser et al., 2024)."(p. 32)
Part of Destabilising Dynamics
Other risks from Hammond2025 (42)
7.6 Multi-agent risks > Miscoordination > Incompatible strategies
7.6 Multi-agent risks > Miscoordination > Credit Assignment
7.6 Multi-agent risks > Miscoordination > Limited Interactions
7.6 Multi-agent risks > Conflict > Social Dilemmas