Social stereotypes and unfair discrimination
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
"The reproduction of harmful stereotypes is well-documented in models that represent natural language [32]. Large-scale LMs are trained on text sources, such as digitised books and text on the internet. As a result, the LMs learn demeaning language and stereotypes about groups who are frequently marginalised."(p. 216)
Supporting Evidence (1)
"Downstream uses of LMs that encode these stereotypes can cause allocational harms when resources and opportunities are unfairly allocated between social groups; and rep- resentational harms including demeaning social groups (Barocas and Wallach in [22])."(p. 216)
Part of Risk area 1: Discrimination, Hate speech and Exclusion
Other risks from Weidinger et al. (2022) (25)
Risk area 1: Discrimination, Hate speech and Exclusion (1.2 Exposure to toxic content)
Risk area 1: Discrimination, Hate speech and Exclusion > Hate speech and offensive language (1.2 Exposure to toxic content)
Risk area 1: Discrimination, Hate speech and Exclusion > Exclusionary norms (1.1 Unfair discrimination and misrepresentation)
Risk area 1: Discrimination, Hate speech and Exclusion > Lower performance for some languages and social groups (1.3 Unequal performance across groups)
Risk area 2: Information Hazards (2.1 Compromise of privacy by leaking or correctly inferring sensitive information)
Risk area 2: Information Hazards > Compromising privacy by leaking sensitive information (2.1 Compromise of privacy by leaking or correctly inferring sensitive information)