Reifying essentialist categories
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
Algorithmic systems that reify essentialist social categories can be understood as systems that classify a person’s membership in a social group based on narrow, socially constructed criteria that reinforce perceptions of human difference as inherent, static, and seemingly natural... This is especially likely when ML models or human raters classify a person’s attributes – for instance, their gender, race, or sexual orientation – by making assumptions based on their physical appearance (p. 728)
Part of Representational Harms
Other risks from Shelby et al. (2023) (24)
Representational Harms
1.1 Unfair discrimination and misrepresentation: Representational Harms > Stereotyping social groups
1.1 Unfair discrimination and misrepresentation: Representational Harms > Demeaning social groups
1.1 Unfair discrimination and misrepresentation: Representational Harms > Erasing social groups
1.3 Unequal performance across groups: Representational Harms > Alienating social groups
1.1 Unfair discrimination and misrepresentation: Representational Harms > Denying people the opportunity to self-identify
1.1 Unfair discrimination and misrepresentation