Developing cutting-edge AI technologies requires significant computational power, expertise, financial resources, and datasets. As such, there is a risk that the most influential and valuable AI technologies, along with their political and competitive benefits, could be monopolized by a handful of powerful entities, such as major technology corporations or governments. If AI is primarily controlled by a few entities, the instructions and data that shape it could reflect those entities' narrow perspectives, experiences, and priorities. Without input from diverse parties, AI systems may operate in ways that systematically favor the controlling entity and fail to serve the needs of the broader population.
Current AI systems suffer from global inequities in performance and access that disproportionately impact historically disadvantaged groups. These inequities often relate to language, culture, knowledge, paywalls, and access to hardware or the internet. As it becomes easier to integrate AI systems into a wider range of applications and services, these existing disparities could become entrenched and widen further.
In situations where AI is embedded in essential services (e.g., social security and welfare, tax filing, insurance, hospital infrastructure), many more people, including those who are currently disenfranchised, may be denied appropriate access to critical resources and benefits. The centralization of AI systems and their authoritative power could also enable governments or other empowered actors to pursue overly aggressive forms of censorship, oppression, and surveillance. Over time, these measures may become normalized, weakening or eliminating the checks and balances that prevent the abuse of power.
Excerpt from the MIT AI Risk Repository full report
AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.
[Figure: Incident volume relative to governance coverage; each dot is one of 24 subdomains]
The repository's causal taxonomy classifies each risk along three dimensions:

Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
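As an illustration, an incident record tagged along these three dimensions could be represented as a small data structure. The sketch below (in Python) is a minimal assumption of what such a record might look like; the class names, enum values, and example tags loosely follow the repository's published categories but are not an official schema.

    from dataclasses import dataclass
    from enum import Enum

    class Entity(Enum):
        AI = "ai"
        HUMAN = "human"
        OTHER = "other"

    class Intent(Enum):
        INTENTIONAL = "intentional"
        UNINTENTIONAL = "unintentional"
        OTHER = "other"

    class Timing(Enum):
        PRE_DEPLOYMENT = "pre-deployment"
        POST_DEPLOYMENT = "post-deployment"
        OTHER = "other"

    @dataclass
    class IncidentRecord:
        """One incident tagged with the causal taxonomy's three dimensions."""
        title: str
        entity: Entity  # who or what caused the harm
        intent: Intent  # intentional vs. accidental
        timing: Timing  # pre- vs. post-deployment

    # Illustrative tagging of the crawler incident described below;
    # not the repository's official classification.
    incident = IncidentRecord(
        title="AI crawlers overwhelm open source infrastructure",
        entity=Entity.HUMAN,
        intent=Intent.UNINTENTIONAL,
        timing=Timing.POST_DEPLOYMENT,
    )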
AI companies' web crawlers overwhelmed open source infrastructure with aggressive data scraping, causing outages and forcing projects to implement defensive measures that also impacted legitimate users.
Developers: Alibaba, Unnamed Generative AI Companies
Deployers: Alibaba, Unnamed Generative AI Companies
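The defensive measures mentioned above are typically per-client rate limits. Below is a minimal sketch of a token-bucket limiter keyed by user agent; the thresholds and the user-agent heuristic are illustrative assumptions, not any project's actual configuration (GPTBot, CCBot, and Bytespider are real crawler user agents, but matching on them is a deliberately naive stand-in for real traffic classification).

    import time

    # Hypothetical token-bucket rate limiter keyed by client identity.
    class TokenBucket:
        def __init__(self, rate_per_sec: float, burst: int):
            self.rate = rate_per_sec
            self.capacity = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    buckets: dict[str, TokenBucket] = {}

    def allow_request(user_agent: str) -> bool:
        # Suspected AI crawlers get a far stricter budget than other clients;
        # the substring check is a naive stand-in for real classification.
        is_ai_crawler = any(bot in user_agent for bot in ("GPTBot", "CCBot", "Bytespider"))
        rate, burst = (0.5, 5) if is_ai_crawler else (10.0, 50)
        bucket = buckets.setdefault(user_agent, TokenBucket(rate, burst))
        return bucket.allow()

A limiter like this also illustrates the trade-off the incident describes: coarse user-agent heuristics inevitably throttle some legitimate traffic along with the crawlers.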
South Korea's antitrust watchdog investigated e-commerce giant Coupang for allegedly manipulating search algorithms to prioritize its own products over suppliers' products, resulting in a 3.29 billion won fine for unfair business practices.
Developers: Coupang
Deployers: Coupang
Amazon adjusted its product-search algorithm to prioritize profitability over relevance, potentially favoring Amazon's own brands and products that generate higher profit margins for the company.
Developers: Amazon
Deployers: Amazon
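To make the self-preferencing mechanism concrete, consider a hypothetical ranking score that blends query relevance with platform profitability. The weights, fields, and numbers below are invented for illustration and do not describe Amazon's or Coupang's actual systems.

    from dataclasses import dataclass

    @dataclass
    class Product:
        relevance: float      # query-match score in [0, 1]
        profit_margin: float  # platform's margin in [0, 1]
        first_party: bool     # sold under the platform's own brand

    # Hypothetical scoring function: raising profit_weight or own_brand_boost
    # above zero shifts rankings away from pure relevance toward results that
    # benefit the platform, which is the self-preferencing pattern regulators
    # scrutinized in the incidents above.
    def rank_score(p: Product, profit_weight: float = 0.0, own_brand_boost: float = 0.0) -> float:
        score = (1.0 - profit_weight) * p.relevance + profit_weight * p.profit_margin
        if p.first_party:
            score += own_brand_boost
        return score

    products = [
        Product(relevance=0.9, profit_margin=0.1, first_party=False),
        Product(relevance=0.6, profit_margin=0.6, first_party=True),
    ]
    # Pure relevance ranks the third-party item first...
    print(sorted(products, key=lambda p: rank_score(p), reverse=True)[0].first_party)           # False
    # ...while a profit-weighted, brand-boosted score flips the order.
    print(sorted(products, key=lambda p: rank_score(p, 0.5, 0.1), reverse=True)[0].first_party)  # True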
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
40 shared governance docs
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
38 shared governance docs
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures with significant consequences, especially in critical applications or areas that require moral reasoning.
36 shared governance docs
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
36 shared governance docs
Establishes the Artificial Intelligence Council and a Sandbox Program for testing AI systems, tasked with regulating AI to prevent harm, discrimination, and privacy infringement. Requires disclosure of AI use to consumers and authorizes the attorney general to enforce compliance and impose penalties.
Promotes a people-centered approach to AI governance, emphasizing stakeholder cooperation to prevent AI safety risks. Outlines safety guidelines for developers, service providers, and users, focusing on ethics, data protection, risk assessment, and mitigation strategies.
Recommends principles for trustworthy AI including inclusivity, transparency, robustness, and accountability. Encourages investment in AI R&D, the creation of inclusive digital ecosystems, and the establishment of adaptable policy frameworks; advocates for international cooperation and capacity-building to advance AI governance and labor market transformation.