
AI-induced strategic instability

A Survey of the Potential Long-term Impacts of AI: How AI Could Lead to Long-term Changes in Science, Cooperation, Power, Epistemics and Values


Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.

"For example, AI could undermine nuclear strategic stability by making it easier to discover and destroy previously secure nuclear launch facilities [30, 46, 49]. AI may also offer more extreme first-strike advantages or novel destructive capabilities that could disrupt deterrence, such as cyber capabilities being used to knock out opponents' nuclear command and control [15, 29]. The use of AI capabilities may make it less clear where attacks originate from, making it easier for aggressors to obfuscate an attack, and therefore reducing the costs of initiating one. By making it more difficult to explain their military decisions, AI may give states a carte blanche to act more aggressively [20]. By creating a wider and more vulnerable attack surface, AI-related infrastructure may make war more tempting by lowering the cost of offensive action (for example, it might be sufficient to attack just data centres to do substantial harm), or by creating a 'use-them-or-lose-them' dynamic around powerful yet vulnerable military AI systems. In this way, AI could exacerbate the 'capability-vulnerability paradox' [22], where the very digital technologies that make militaries effective on the battlefield also introduce critical new vulnerabilities." (p. 4)

Part of Worsened conflict

Other risks from Clarke2023 (19)