AI technology has the potential to redefine power dynamics across economic, political, and social spheres. As a result, many countries and corporations are investing heavily in AI research and development with the goal of becoming leaders. While market competition can lead to beneficial economic and consumer outcomes, it also presents various risks, particularly in the field of AI. In intensely competitive markets, AI developers and deployers may have an incentive to prioritize short-term, internal goals (e.g., profit or influence) to "secure their positions and survive", at the expense of external goals that support longer-term societal well-being.
A key concern is that AI companies may cut corners on safety, releasing insecure and error-prone systems in a bid to stay ahead. Such immature systems may pose risks that are hard to identify and evaluate. As with the fossil fuel industry, profit-focused developers may allow their technologies to cause widespread externalities, such as "pollution, resource depletion, mental illness, misinformation, or injustice". Countries or other state-like actors may engage in an AI-enabled military arms race, which could encourage risky bets with a high potential for harm.
Excerpt from the MIT AI Risk Repository full report
AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk they release unsafe and error-prone systems.
Incident volume relative to governance coverage — each dot is one of 24 subdomains
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
South Korea's Fair Trade Commission fined Naver 26.7 billion won for manipulating search algorithms between 2012 and 2015 to favor its own shopping platform and payment service over competitors.
Developers: Naver
Deployers: Naver
Apple modified its App Store ranking algorithm to combat manipulation by third-party services, causing significant ranking changes for Chinese apps between March 21 and 26, with some legitimate apps dropping hundreds of positions while others rose substantially.
Developers: Apple
Deployers: Apple
The National Residency Matching Program (NRMP) uses a computer algorithm to match medical residents to hospitals, addressing timing problems in the medical labor market that historically led to increasingly early appointment dates and frenzied decision-making processes.
Developers: National Resident Matching Program
Deployers: National Resident Matching Program
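The NRMP match is built on the deferred-acceptance family of algorithms (the applicant-proposing Gale-Shapley procedure, which the NRMP's Roth-Peranson algorithm extends to handle couples and other real-world constraints). A minimal sketch of the core idea, with illustrative applicant, program, and capacity data that are not from the source:

```python
def deferred_acceptance(applicant_prefs, program_prefs, capacities):
    """Applicant-proposing deferred acceptance (Gale-Shapley).

    applicant_prefs: {applicant: [programs in preference order]}
    program_prefs:   {program: [applicants in preference order]}
    capacities:      {program: number of open positions}
    Returns {program: [tentatively matched applicants]}.
    """
    # rank[p][a]: how highly program p ranks applicant a (lower is better)
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    matches = {p: [] for p in program_prefs}       # tentative assignments
    next_choice = {a: 0 for a in applicant_prefs}  # next program to propose to
    free = list(applicant_prefs)                   # currently unmatched

    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue  # list exhausted; applicant stays unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)  # program did not rank this applicant
            continue
        matches[p].append(a)  # program tentatively holds the applicant
        if len(matches[p]) > capacities[p]:
            # Over capacity: release the least-preferred tentative match,
            # who then proposes further down their own list
            worst = max(matches[p], key=lambda x: rank[p][x])
            matches[p].remove(worst)
            free.append(worst)
    return matches


# Hypothetical example: three applicants, two one-seat programs
applicant_prefs = {"ana": ["mercy", "city"],
                   "ben": ["city", "mercy"],
                   "cam": ["city"]}
program_prefs = {"mercy": ["ben", "ana"],
                 "city": ["cam", "ben", "ana"]}
capacities = {"mercy": 1, "city": 1}
result = deferred_acceptance(applicant_prefs, program_prefs, capacities)
```

Because acceptances are only tentative until every applicant's list is exhausted, no one gains by applying early or strategically, which is exactly the timing failure in the pre-match residency market that the paragraph above describes.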
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
242 shared governance docs
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
180 shared governance docs
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
169 shared governance docs
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failures within the AI system itself.
136 shared governance docs
Authorizes the Secretary of Defense to establish AI Institutes focused on national security. Directs support for interdisciplinary AI research, partnership, innovation ecosystems, and workforce development.
Facilitates integration of commercial AI for logistics into two Department of Defense exercises in 2026. Directs the Secretary of Defense to brief Congress on exercise specifics and AI integration impact on readiness and operations.
Requires the Secretary of Defense, via the Chief Digital and Artificial Intelligence Officer, to establish an AI sandbox task force by April 2026 to facilitate AI experimentation and deployment. Identifies members and duties, with termination by January 2030.