Many AI models, especially those based on deep learning, involve complex mathematical structures that can be difficult to interpret, even for experts. AI systems are also often trained on vast datasets that they use to learn patterns and make predictions. The complexity and volume of this data mean that the learning process – how data points influence the AI's development and final decisions – can be opaque. Furthermore, in many cases, the algorithms, data, and specific methodologies used in developing AI are considered proprietary, and companies may be reluctant to share them openly. Because of these factors, obtaining understandable information about the decision-making process for AI can be challenging.
For users, an inability to interrogate how an output was produced can erode trust and confidence in the system's results and foster resistance to adopting the technology. Users may also misinterpret the model's outputs, or struggle to locate and correct errors in them.
For regulators, AI opacity can frustrate auditing and the enforcement of compliance standards. Where an AI system's compliance cannot be assessed, a "responsibility gap" may open, making it difficult or impossible to hold systems or the relevant actors accountable for their actions. In sectors such as healthcare and the military, decisions made by AI systems can have profound consequences, making transparency and accountability particularly pressing.
Excerpt from the MIT AI Risk Repository full report
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
Figure: Incident volume relative to governance coverage (each dot is one of 24 subdomains).
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arose pre- or post-deployment
Uber deployed a new algorithmic pay structure called 'Upfront Fares' in 24 U.S. cities that replaced transparent time-and-distance calculations with opaque algorithmic pricing, resulting in reduced driver earnings and decreased pay transparency.
Developers: Uber
Deployers: Uber
A ranch in Israel's Negev desert was unable to understand how the Tax Authority's automated software calculated their fine, leading to a legal dispute over whether software source code constitutes 'information' that must be disclosed to the public.
Developers: Israeli Tax Authority
Deployers: Israeli Tax Authority
Amsterdam courts ruled that Uber and Ola used automated decision-making systems to suspend and penalize drivers without meaningful human oversight, violating GDPR rights to transparency and appeal.
Developers: Uber
Deployers: Uber
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
259 shared governance docs
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
252 shared governance docs
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
251 shared governance docs
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
198 shared governance docs
Establishes a "Cross-Functional Team," led by the Chief Digital and Artificial Intelligence Officer, for artificial intelligence (AI) model assessment in the Department of Defense (DOD). Mandates the designation of functional leads for AI applications in DOD. Requires assessment of major AI systems by 2028, followed by a congressional briefing, and terminates the team by 2030.
Amends the Intelligence Authorization Act to designate Chief Artificial Intelligence Officers. Requires the Chief Information Officer to identify reusable AI systems and promote sharing AI data and systems. Mandates performance tracking of AI systems.
Ensures the Director applies AI policies to publicly available models in classified environments. Requires the Chief AI Officer to establish AI testing standards that account for risk and to ensure secure environments for model evaluation. Prevents intelligence community authority over model alterations for viewpoint bias.