Rigidity and Mistaken Commitments
Risks from multi-agent interactions, due to incentives (which can lead to conflict or collusion) and/or the structure of multi-agent systems, which can create cascading failures, selection pressures, new security vulnerabilities, and a lack of shared information and trust.
"Rigidity and Mistaken Commitments. Even when it is desirable to be able to make threats in order to deter socially harmful behaviour, doing so using AI agents effectively removes the human from the loop, which could prove disastrous in high-stakes contexts (e.g., a false positive in a nuclear sub- marine’s warning system; see also Case Study 11), or when irresponsible actors are enabled in making disproportionate or mistaken commitments."(p. 34)
Supporting Evidence (2)
"On the other hand, such commitments may only be credible to the extent that a human cannot intervene, increasing the incentive for delegation to AI agents. This could be worsened if other, potentially incompatible commitments can be made by other actors, leading to a ‘commitment race’ (Kokotajlo, 2019) or potential conflict. In complex networks (see Section 3.2), commitments triggered by a small number of agents could – without careful planning – cascade through the network and have a far more damaging effect (Xia & Conitzer, 2010)."(p. 35)
"During the Cold War, the Soviet Union developed the the automated Perimeter system – of- ten called ‘Dead Hand’ – to guarantee a nuclear launch if its leadership were incapacitated, thus ensuring a credible commitment of retaliation (Hoffman, 2009). While this mechanism was intended as a deterrent, its automatic and largely irrevocable nature exemplifies how credible commitments can become dangerously dual-use: once triggered, there would be little chance to override or de-escalate. In a similar vein, during Operation Iraqi Freedom in 2003 an automated US missile defence system shot down a British plane, killing both occupants (Borg et al., 2024; Talbot, 2005). While the system’s operators had one minute to override the system (even in its autonomous mode), they decided to trust its judgment, resulting in a tragic outcome. In more general AI contexts, similarly inflexible commitments could offer short-term advantages or trust but risk uncontrolled escalation, lock-in, and catastrophic outcomes if not carefully designed with appropriate fail-safes and oversight."(p. 35)
Part of: Commitment and Trust
Other risks from Hammond2025 (42)
- Miscoordination
- 7.6 Multi-agent risks > Miscoordination > Incompatible strategies
- 7.6 Multi-agent risks > Miscoordination > Credit Assignment
- 7.6 Multi-agent risks > Miscoordination > Limited Interactions
- 7.6 Multi-agent risks > Conflict
- 7.6 Multi-agent risks > Conflict > Social Dilemmas
- 7.6 Multi-agent risks