Allocative Harms

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction

Shelby et al. (2023)

Category: Risk Domain

Unequal treatment of individuals or groups by AI systems, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes such as the withholding of opportunities or resources from those groups.

"These harms occur when a system withholds information, opportunities, or resources [22] from historically marginalized groups in domains that affect material well-being [146], such as housing [47], employment [201], social services [15, 201], finance [117], education [119], and healthcare [158]." (p. 729)
