
Reifying essentialist categories

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction

Shelby et al. (2023)

Sub-category
Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

Algorithmic systems that reify essentialist social categories can be understood as systems that classify a person's membership in a social group based on narrow, socially constructed criteria that reinforce perceptions of human difference as inherent, static, and seemingly natural... especially likely when ML models or human raters classify a person's attributes – for instance, their gender, race, or sexual orientation – by making assumptions based on their physical appearance (p. 728)

Part of Representational Harms
