Stereotyping social groups
Risk Domain: Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
Stereotyping in an algorithmic system refers to how the system's outputs reflect "beliefs about the characteristics, attributes, and behaviors of members of certain groups ... and about how and why certain attributes go together" (p. 728).
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Representational Harms
Other risks from Shelby et al. (2023) (24)
Representational Harms
  Risk domain: 1.1 Unfair discrimination and misrepresentation; Entity: Other; Intent: Unintentional; Timing: Post-deployment
Representational Harms > Demeaning social groups
  Risk domain: 1.1 Unfair discrimination and misrepresentation; Entity: AI system; Intent: Unintentional; Timing: Post-deployment
Representational Harms > Erasing social groups
  Risk domain: 1.3 Unequal performance across groups; Entity: Human; Intent: Unintentional; Timing: Other
Representational Harms > Alienating social groups
  Risk domain: 1.1 Unfair discrimination and misrepresentation; Entity: AI system; Intent: Unintentional; Timing: Post-deployment
Representational Harms > Denying people the opportunity to self-identify
  Risk domain: 1.1 Unfair discrimination and misrepresentation; Entity: AI system; Intent: Unintentional; Timing: Post-deployment
Representational Harms > Reifying essentialist categories
  Risk domain: 1.1 Unfair discrimination and misrepresentation; Entity: AI system; Intent: Unintentional; Timing: Post-deployment