AI developers or state-like actors compete in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk that they release unsafe and error-prone systems.
"Although competition between companies can be beneficial, creating more useful products for consumers, there are also pitfalls. First, the benefits of economic activity may be unevenly distributed, incentivizing those who benefit most from it to disregard the harms to others. Second, under intense market competition, businesses tend to focus much more on short-term gains than on long-term outcomes. With this mindset, companies often pursue something that can make a lot of profit in the short term, even if it poses a societal risk in the long term."(p. 17)
Supporting Evidence (14)
"Economic Competition Undercuts Safety"(p. 18)
"Competitive pressure is fueling a corporate AI race."(p. 18)
"Competitive pressures have contributed to major commercial and industrial disasters."(p. 18)
"Competition incentivizes businesses to deploy potentially unsafe AI systems"(p. 18)
"Corporations will face pressure to replace humans with AIs."(p. 19)
"AIs could lead to mass unemployment."(p. 19)
"Automated AI R&D. AI agents would have the potential to automate the research and development (R&D) of AI itself. AI is increasingly automating parts of the research process [57], and this could lead to AI capabilities growing at increasing rates, to the point where humans are no longer the driving force behind AI development. If this trend continues unchecked, it could escalate risks associated with AIs progressing faster than our capacity to manage and regulate them."(p. 19)
"Conceding power to AIs could lead to human enfeeblement."(p. 19)
"Evolutionary Pressures...there are strong pressures to replace humans with AIs, cede more control to them, and reduce human oversight in various settings, despite the potential harms. We can re-frame this as a general trend resulting from evolutionary dynamics...an unfortunate truth is that AIs will simply be more fit than humans...it is likely that we will build an ecosystem of competing AIs over which it may be difficult to maintain control in the long run. We will now discuss how natural selection influences the development of AI systems and why evolution favors selfish behaviors. We will also look at how competition might arise and play out between AIs and humans, and how this could create catastrophic risks"(p. 20)
"Selfish behaviors may erode safety measures that some of us implement. AIs that gain influence and provide economic value will predominate, while AIs that adhere to the most constraints will be less competitive. For example, AIs following the constraint “never break the law” have fewer options than AIs following the constraint “don’t get caught breaking the law.""(p. 21)
"Humans only have nominal influence over AI selection. One might think we could avoid the development of selfish behaviors by ensuring we do not select AIs that exhibit them. However, the companies developing AIs are not selecting the safest path but instead succumbing to evolutionary pressures."(p. 21)
"AIs can be more fit than humans....Given the exponential increase in microprocessor speeds, AIs have the potential to process information and “think” at a pace that far surpasses human neurons, but it could be even more dramatic than the speed difference between humans and sloths—possibly more like the speed difference between humans and plants."(p. 22)
"AIs would have little reason to cooperate with or be altruistic toward humans. Cooperation and altruism evolved because they increase fitness. There are numerous reasons why humans cooperate with other humans, like direct reciprocity. Also known as “quid pro quo,” direct reciprocity can be summed up by the idiom “you scratch my back, I’ll scratch yours.” While humans would initially select AIs that were cooperative, the natural selection process would eventually go beyond our control, once AIs were in charge of many or most processes, and interacting predominantly with one another. At that point, there would be little we could offer AIs, given that they will be able to “think” at least hundreds of times faster than us. Involving us in any cooperation or decision-making processes would simply slow them down, giving them no more reason to cooperate with us than we do with gorillas."(p. 22)
"AIs becoming more powerful than humans could leave us highly vulnerable. As the most dominant species, humans have deliberately harmed many other species, and helped drive species such as woolly mammoths and Neanderthals to extinction. In many cases, the harm was not even deliberate, but instead a result of us merely prioritizing our goals over their wellbeing. To harm humans, AIs wouldn’t need to be any more genocidal than someone removing an ant colony on their front lawn. If AIs are able to control the environment more effectively than we can, they could treat us with the same disregard"(p. 22)
Part of AI Race (Environmental/Structural)
Other risks from Hendrycks, Mazeika & Woodside (2023) (13)
Malicious Use (Intentional) → 4.0 Malicious Actors & Misuse
Malicious Use (Intentional) > Bioterrorism → 4.2 Cyberattacks, weapon development or use, and mass harm
Malicious Use (Intentional) > Unleashing AI Agents → 4.2 Cyberattacks, weapon development or use, and mass harm
Malicious Use (Intentional) > Persuasive AIs → 4.1 Disinformation, surveillance, and influence at scale
Malicious Use (Intentional) > Concentration of Power → 6.1 Power centralization and unfair distribution of benefits
AI Race (Environmental/Structural) → 6.4 Competitive dynamics