
Social stereotypes and unfair discrimination

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Risk Domain (sub-category)

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"The reproduction of harmful stereotypes is well-documented in models that represent natural language [32]. Large-scale LMs are trained on text sources, such as digitised books and text on the internet. As a result, the LMs learn demeaning language and stereotypes about groups who are frequently marginalised."(p. 216)

Supporting Evidence (1)

1. "Downstream uses of LMs that encode these stereotypes can cause allocational harms when resources and opportunities are unfairly allocated between social groups; and representational harms including demeaning social groups (Barocas and Wallach in [22])." (p. 216)

Part of Risk area 1: Discrimination, Hate speech and Exclusion
