As AI systems become increasingly capable and intelligent, humans may be tempted to delegate many of their decisions and actions to AI. Although such delegation can be beneficial (e.g., by saving time or money), it may lead to undesirable outcomes when it is unconstrained or inappropriate. For example, if AIs take over tasks that typically require human creativity and analytical thinking, humans may engage less frequently in these cognitive processes. Over time, this may erode our ability to think critically and solve problems independently.
As individuals become more reliant on AI for everyday decisions – from what to eat and how to spend their time and money to more significant choices like career and relationships – there is a risk that they will lose their sense of free will and autonomy. If AIs begin to shape a person's life path in ways that do not align with their original aspirations and desires, this could limit their personal growth and prevent the pursuit of a fulfilling life.
At a societal level, organizations may hand over control to AI systems to stay competitive or reduce costs. If a significant number of organizations adopt AI systems and automate decision-making processes, especially in a way that is opaque and difficult to challenge, it could lead to widespread job displacement and a growing sense of helplessness among the general population.
Excerpt from the MIT AI Risk Repository full report
Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.
Incident volume relative to governance coverage — each dot is one of 24 subdomains
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk occurred pre- or post-deployment
Two men blocked a Waymo autonomous taxi in San Francisco and harassed the female passenger inside, demanding her phone number while the self-driving car remained immobilized in the street.
Developers: Waymo
Deployers: Waymo
Beverly Hills police officers intentionally played copyrighted music during interactions with a citizen activist to trigger Instagram's copyright detection algorithms and disrupt his live streams documenting police interactions.
Developers: Instagram
Deployers: Instagram
The Honolulu Police Department spent $150,000 in federal COVID relief funds to purchase a Boston Dynamics Spot robot to take temperatures of homeless individuals at encampments, raising concerns about dehumanization and potential future surveillance uses.
Developers: Boston Dynamics
Deployers: Honolulu Police Department
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
159 shared governance docs
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise user expectation of privacy, assist identity theft, or cause loss of confidential intellectual property.
155 shared governance docs
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
155 shared governance docs
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
137 shared governance docs
Establishes the Artificial Intelligence Futures Steering Committee by April 1, 2026, under the Secretary of Defense. Directs it to develop policies for AI adoption, assess AI trajectories, and analyze AI risks and adversary developments. Requires quarterly meetings and a report to U.S. Congress by January 31, 2027.
Defines "companion chatbot" and requires operators to notify users when they interact with AI. Requires protocols to prevent the production of harmful content. Mandates annual reports on crisis notifications. Offers civil remedies for violations. Ensures suitability disclosures for minors.
Requires large frontier developers to implement and publish frontier AI frameworks, assess catastrophic risks, and publish transparency reports; requires the Office of Emergency Services to establish reporting mechanisms for critical safety incidents and catastrophic risk assessments; establishes a consortium to develop a framework for the creation of CalCompute; creates civil penalties for violations of this chapter.