
Worsened conflict

A Survey of the Potential Long-term Impacts of AI: How AI Could Lead to Long-term Changes in Science, Cooperation, Power, Epistemics and Values

Category: Risk Domain

AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk that they release unsafe and error-prone systems.

"Cooperation and conflict: we’re seeing more focus and investment on the kinds of AI capabilities that make conflict more likely and severe, rather than those likely to improve cooperation. So, on our current trajectory, AI seems more likely to have negative long-term impacts in this area."(p. 9)

Sub-categories (4)

AI enables development of weapons of mass destruction

"AI is already enabling the development of weapons which could cause mass destruction—including new weapons that themselves use AI capabilities, such as Lethal Autonomous Weapons [2], and the potential use of AI to speed up the development of other potentially dangerous technologies, such as engineered pathogens (as discussed in Section 2)."

4.2 Cyberattacks, weapon development or use, and mass harm
Human · Intentional · Post-deployment

AI enables automation of military decision-making

"One concern here is humans not remaining in the loop for some military decisions, creating the possibility of unintentional escalation because of:
• Automated tactical decision-making, by ‘in-theatre’ AI systems (e.g. border patrol systems start accidentally firing on one another), leading to either: tactical-level war crimes, or strategic-level decisions to initiate conflict or escalate to a higher level of intensity—for example, countervalue (e.g. city-) targeting, or going nuclear [62].
• Automated strategic decision-making, by ‘out-of-theatre’ AI systems—for example, conflict prediction or strategic planning systems giving a faulty ‘imminent attack’ warning [20]."

5.2 Loss of human agency and autonomy
AI system · Unintentional · Post-deployment

AI-induced strategic instability

"For example, AI could undermine nuclear strategic stability by making it easier to discover and destroy previously secure nuclear launch facilities [30, 46, 49]. AI may also offer more extreme first-strike advantages or novel destructive capabilities that could disrupt deterrence, such as cyber capabilities being used to knock out opponents’ nuclear command and control [15, 29]. The use of AI capabilities may make it less clear where attacks originate from, making it easier for aggressors to obfuscate an attack, and therefore reducing the costs of initiating one. By making it more difficult to explain their military decisions, AI may give states a carte blanche to act more aggressively [20]. By creating a wider and more vulnerable attack surface, AI-related infrastructure may make war more tempting by lowering the cost of offensive action (for example, it might be sufficient to attack just data centres to do substantial harm), or by creating a ‘use-them-or-lose-them’ dynamic around powerful yet vulnerable military AI systems. In this way, AI could exacerbate the ‘capability-vulnerability paradox’ [22], where the very digital technologies that make militaries effective on the battlefield also introduce critical new vulnerabilities."

5.2 Loss of human agency and autonomy
AI system · Unintentional · Post-deployment

Resource conflicts driven by AI development

"AI development may itself become a new flash point for conflicts—causing more conflict to occur—especially conflicts over AI-relevant resources (such as data centres, semiconductor manufacturing facilities and raw materials)."

6.4 Competitive dynamics
Other · Unintentional · Pre-deployment
