Multiple AI systems, including word-embedding algorithms and beauty contest judging systems, exhibited racial and gender biases due to training on datasets that underrepresented minorities and contained societal biases.
In 2016, researchers from Boston University and Microsoft discovered racist and sexist tendencies in word-embedding algorithms, which find correlations among words by analyzing large bodies of text. Trained on hundreds of thousands of articles from Google News, Wikipedia, and other online sources, the algorithms associated 'computer programmer' with male pronouns and 'homemaker' with female ones. Around the same time, several other AI systems demonstrated similar biases: Google's Photos app tagged two black people as gorillas; an AI-judged beauty contest, Beauty.AI, selected 44 winners who were nearly all white despite submissions from over 100 countries, including large groups from India and Africa; and face-analysis services from IBM and Microsoft frequently erred when analyzing pictures of women with dark skin. The Beauty.AI contest, created by Youth Laboratories and supported by Microsoft, used algorithms to assess beauty based on facial symmetry, wrinkles, and apparent age across more than 6,000 submitted photos. The biases arose because the training datasets lacked diversity: Beauty.AI's database, for example, contained significantly more white people than people of color, and 75% of contest entrants were European and white. These incidents highlighted how AI systems can inherit and amplify societal biases present in their training data.
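The occupation-gender association the researchers measured can be sketched with cosine similarity between embedding vectors. The toy 3-dimensional vectors below are hypothetical values chosen purely for illustration (real embeddings have hundreds of dimensions and are learned from large corpora); the comparison itself, similarity to a male word minus similarity to a female word, mirrors the kind of test used to surface the bias.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-ins for learned word embeddings (illustrative only).
emb = {
    "he":         np.array([0.9, 0.1, 0.0]),
    "she":        np.array([0.1, 0.9, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.3]),
    "homemaker":  np.array([0.2, 0.8, 0.3]),
}

def gender_skew(word):
    """Positive if the word sits closer to 'he' than to 'she' in the space."""
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

print(gender_skew("programmer"))  # positive: skews toward 'he'
print(gender_skew("homemaker"))   # negative: skews toward 'she'
```

In a biased embedding space trained on biased text, occupation words end up geometrically closer to one gendered word than the other, which is what the score above exposes.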
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for and unfair representation of those groups.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed