Decisions made during the development of an algorithmic system, along with the content, quality, and diversity of its training data, can significantly affect which people and experiences the system can effectively understand, represent, and accommodate. Biases and limitations introduced through these factors can produce models that perform significantly worse for certain subpopulations than for others, especially those defined by disability, gender identity, race, social status, and ethnicity. For example, LLMs trained on a small number of languages can underperform for speakers of languages that are missing or underrepresented in the training data. When an algorithmic system underperforms for certain groups, the consequences can include a reduced ability, or complete inability, to use and benefit from the system; increased effort or difficulty in using it effectively; feelings of alienation, frustration, and exclusion caused by the lack of inclusive design; and, ultimately, unequal outcomes across various domains.
Excerpt from the MIT AI Risk Repository full report: "Accuracy and effectiveness of AI decisions and actions are dependent on group membership, where decisions in AI system design and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users."
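In practice, unequal performance of this kind is surfaced by disaggregating an evaluation metric across subpopulations rather than reporting a single aggregate score. The following minimal Python sketch (the group labels, toy data, and metric choice are illustrative assumptions, not drawn from any incident described on this page) computes per-group false positive rates for a binary classifier:

```python
from collections import defaultdict

def per_group_false_positive_rate(y_true, y_pred, groups):
    """Disaggregate the false positive rate by group membership.

    y_true  -- ground-truth labels (0 = negative, 1 = positive)
    y_pred  -- model predictions (0 or 1)
    groups  -- group label for each example (e.g. a demographic attribute)
    """
    false_pos = defaultdict(int)  # predicted 1 when the truth was 0
    negatives = defaultdict(int)  # all ground-truth negatives, per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:
            negatives[group] += 1
            if pred == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in negatives.items() if n}

# Hypothetical toy data: every example is a true negative, so any positive
# prediction is a false positive.
y_true = [0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [0, 0, 0, 1, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(per_group_false_positive_rate(y_true, y_pred, groups))
# {'A': 0.25, 'B': 0.75} -- the aggregate rate of 0.5 hides the fact that
# group B experiences three times the false positive rate of group A.
```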
[Figure: Incident volume relative to governance coverage; each dot represents one of 24 subdomains.]
Each risk in the repository is also classified along three causal dimensions:

Entity: who or what caused the harm.
Intent: whether the harm was intentional or accidental.
Timing: whether the risk arises pre- or post-deployment.
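Taken together, these dimensions amount to a simple classification schema. As a rough sketch of how an incident record might be encoded against it (the field names and enumeration values below are illustrative assumptions, not the repository's actual schema):

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enumerations; the repository's actual value sets may differ.
class Entity(Enum):
    HUMAN = "human"   # harm caused by a person or organization
    AI = "ai"         # harm caused by the AI system itself
    OTHER = "other"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"
    OTHER = "other"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"
    OTHER = "other"

@dataclass
class IncidentRecord:
    description: str
    entity: Entity
    intent: Intent
    timing: Timing

# Hypothetical encoding of an unintentional, post-deployment harm.
record = IncidentRecord(
    description="Facial recognition misidentification",
    entity=Entity.HUMAN,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
print(record.timing.value)  # "post-deployment"
```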
Facial recognition technology used by UK police was found to have significantly higher false positive rates for Black and Asian people than for white people, with Black women experiencing the highest false positive rate, at 9.9%.
Developers: Unknown Facial Recognition Technology Developers
Deployers: Home Office, Metropolitan Police, Government of the United Kingdom, Law Enforcement, British Law Enforcement
Nevada implemented an AI system from Infinite Campus to identify at-risk students for funding allocation. The system reduced the number of students classified as at-risk from over 270,000 to fewer than 65,000, causing schools to lose funding and scramble to cut programs.
Developers: Infinite Campus
Deployers: Nevada Department Of Education
Police departments across 15 states used facial recognition software in over 1,000 criminal investigations, frequently failing to disclose this use to defendants, leading to the wrongful arrest of at least seven innocent Americans, six of whom were Black.
Developers: Clearview AI
Deployers: Police Departments, Evansville PD, Pflugerville PD, Jefferson Parish Sheriff's Office, Miami PD, West New York PD, NYPD, Coral Springs PD, Arvada PD
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors. (177 shared governance docs)

AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning. (169 shared governance docs)

AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can undermine users' expectations of privacy, facilitate identity theft, or cause the loss of confidential intellectual property. (168 shared governance docs)

Vulnerabilities in AI systems, software development toolchains, and hardware that can be exploited, resulting in unauthorized access, data and privacy breaches, or system manipulation that causes unsafe outputs or behavior. (142 shared governance docs)
Instructs federal agencies to prioritize AI innovation, governance, and public trust while removing bureaucratic barriers. Requires each agency to develop an AI strategy and appoint a Chief AI Officer. Mandates risk management practices for high-impact AI applications, along with transparency and accountability measures.
Requires healthcare entities using AI in California to comply with consumer protection, civil rights, competition, and data privacy laws. Ensures that AI does not override physicians' decisions, discriminate, or infringe on patient privacy. Prohibits AI-driven practices such as denying coverage based on stereotypes.
Prohibits algorithmic discrimination by covered entities under New Jersey's Law Against Discrimination (LAD). Holds entities liable for discriminatory outcomes produced by automated tools. Addresses disparate treatment, disparate impact, and failure to provide reasonable accommodations in connection with AI use.