Governance failure refers to the risks and harms that arise when institutional, regulatory, and policy mechanisms fall short of effectively managing and overseeing the development and deployment of AI systems. Several issues make robust AI governance challenging to implement.
First, it is difficult to determine who is responsible or liable when AI systems fail or make decisions that result in negative consequences. At present, no comprehensive framework exists for assigning legal responsibility to AI agents. Traditional legal principles are built around human actors, whose intentions and actions can generally be identified and judged. AI decision-making, by contrast, is often unpredictable and opaque, involving complex interactions among millions of parameters. In the absence of regulatory or legal incentives to take safety engineering seriously, developers may release poorly designed AI systems, and people harmed by those systems may be left without recourse.
A second challenge for effective AI governance is the rapid pace at which AI systems evolve. Typical governance and policy processes are inherently slow. This mismatch between the speed of AI advances and the speed of regulation can produce immature rules that overlook important aspects of AI governance.
A third challenge for effective governance is regulators' limited ability to influence AI developers and deployers to take safe actions. This limitation is frequently driven by information asymmetry: technology companies often know far more about the capabilities, functioning, and potential uses of their AI systems than regulators do. Without access to this knowledge, regulators can find it difficult to craft targeted rules that address the specific challenges AI poses.
Excerpt from the MIT AI Risk Repository full report:

"Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately."
Figure: Incident volume relative to governance coverage. Each dot is one of 24 subdomains.
Incidents are classified along three dimensions (a code sketch follows the examples below):

Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
OpenAI's ChatGPT flagged a user's messages describing gun-violence scenarios, but the company decided not to alert law enforcement; the user later committed a mass shooting, killing eight people and injuring 25.
Developers: OpenAI
Deployers: OpenAI
Axon Enterprise announced plans to develop Taser-equipped drones for schools to prevent mass shootings, but halted the project after its AI ethics board objected and nine of its twelve members resigned in protest.
Developers: Axon Enterprise
Deployers: None
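To make the classification scheme concrete, here is a minimal sketch of how the two incidents above might be coded along the three dimensions. The enum values, the IncidentCoding structure, and the hand-assigned labels are illustrative assumptions, not the repository's official schema.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative encoding of the three dimensions described above.
# The specific value sets are assumptions, not the repository's official schema.

class Entity(Enum):
    HUMAN = "human"
    AI = "ai"
    OTHER = "other"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"
    OTHER = "other"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"
    OTHER = "other"

@dataclass
class IncidentCoding:
    summary: str
    entity: Entity
    intent: Intent
    timing: Timing
    developers: list = field(default_factory=list)
    deployers: list = field(default_factory=list)

# Hand-coded examples for the two incidents above; these labels are
# illustrative judgments, not official repository codings.
chatgpt_case = IncidentCoding(
    summary="ChatGPT flagged violent content; no alert; later mass shooting",
    entity=Entity.HUMAN,          # a human decision not to escalate
    intent=Intent.UNINTENTIONAL,  # the harm was not the goal of the decision
    timing=Timing.POST_DEPLOYMENT,
    developers=["OpenAI"],
    deployers=["OpenAI"],
)

axon_case = IncidentCoding(
    summary="Axon taser-drone plan halted after ethics-board resignations",
    entity=Entity.HUMAN,          # a deliberate corporate product decision
    intent=Intent.INTENTIONAL,
    timing=Timing.PRE_DEPLOYMENT, # the risk surfaced before any deployment
    developers=["Axon Enterprise"],
)

for case in (chatgpt_case, axon_case):
    print(case.summary, "->", case.entity.value, case.intent.value, case.timing.value)
```

Coding incidents this way makes them easy to aggregate, for example to count how many post-deployment harms trace back to deliberate decisions versus accidents.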
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior. (304 shared governance docs)

AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning. (259 shared governance docs)

Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors. (252 shared governance docs)

AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise user expectations of privacy, assist identity theft, or cause loss of confidential intellectual property. (236 shared governance docs)
Requires the Secretary of Defense, acting through the Chief Digital and Artificial Intelligence Officer, to establish an AI sandbox task force by April 2026 to facilitate AI experimentation and deployment. Specifies the task force's membership and duties; the task force terminates by January 2030.
Establishes the Artificial Intelligence Futures Steering Committee by April 1, 2026, under the Secretary of Defense. Directs it to develop policies for AI adoption, assess AI trajectories, and analyze AI risks and adversary developments. Requires quarterly meetings and a report to the U.S. Congress by January 31, 2027.
Limits funding for AI research within the National Nuclear Security Administration to nuclear security missions. Allows AI research programs elsewhere within the Department of Energy or at other Federal agencies, provided they do not interfere with nuclear security missions or facilities.