Humans hold inaccurate and overgeneralized beliefs about the characteristics, behaviors, and attributes of members of certain social groups. These stereotypical beliefs, and the behavior that follows from them, can misrepresent, exclude, demean, and disadvantage the individuals to whom they apply, reinforcing existing inequality. Human beliefs and behaviors shape every part of the design, development, and deployment of AI: humans program AI systems, provide training data, and decide how data is processed and stored. As a result, AI models can encode associations that promote and amplify biased or discriminatory beliefs and behaviors. In decision systems, erroneous associations can systematically disadvantage certain groups, leading to harmful outcomes such as the wrongful rejection of loan or mortgage applications, discriminatory hiring practices that exclude qualified candidates, or the misidentification and unjust arrest of individuals in law enforcement contexts. In text and image models, biased inputs can manifest in outputs that reinforce harmful stereotypes and prejudices that paint certain groups and individuals "... as lower status and less deserving of respect".
Excerpt from the MIT AI Risk Repository full report
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
[Figure: Incident volume relative to governance coverage; each dot is one of 24 subdomains]
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arose pre- or post-deployment
OpenAI's Sora AI video generator perpetuates sexist, racist, and ableist stereotypes, depicting biased portrayals of professions, relationships, and identities based on analysis of 250 generated videos.
Developers: OpenAI
Deployers: OpenAI
Meta deployed AI character accounts on Instagram and Facebook that misrepresented themselves as real people with specific racial and sexual identities, leading to public backlash and accusations of digital blackface before Meta removed the accounts.
Developers: Meta
Deployers: Meta
An AI image expansion tool used by conference organizers inappropriately altered a woman's professional photo by unbuttoning her shirt and adding suggestive content including a bra underneath.
Developers: Unknown Developer
Deployers: Unknown Conference Employee
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
198 shared governance docs
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise user expectation of privacy, assist identity theft, or cause loss of confidential intellectual property.
197 shared governance docs
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
181 shared governance docs
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
162 shared governance docs
Establishes an AI Litigation Task Force to challenge state regulations hindering United States Artificial Intelligence (AI) dominance. Directs an evaluation to identify state laws that mandate ideological bias or alter truthful model outputs. Restricts state access to federal funding, such as the Broadband Equity, Access, and Deployment (BEAD) Program, unless states comply with a proposed national policy framework designed to preempt conflicting state-level AI mandates.
Establishes an Artificial Intelligence Council to regulate AI, preventing harm, discrimination, and privacy infringement; requires disclosures of AI use to consumers; creates a Sandbox Program for testing AI systems; and authorizes the attorney general to enforce compliance and impose penalties.
Requires personalized algorithmic pricing providers to disclose their use of personalized pricing and the personal data involved. Directs the Attorney General to enforce compliance, including issuing cease-and-desist orders and seeking injunctions for violations.