Representational Harms
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and in the unfair representation of those groups.
"beliefs about different social groups that reproduce unjust societal hierarchies"(p. 728)
Sub-categories (6)
Stereotyping social groups (Domain: 1.1 Unfair discrimination and misrepresentation)
Stereotyping in an algorithmic system refers to how the system’s outputs reflect “beliefs about the characteristics, attributes, and behaviors of members of certain groups... and about how and why certain attributes go together”.
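Such learned associations can be probed directly. Below is a minimal sketch (an illustration, not a method from Shelby et al.) that asks a masked language model how strongly it links occupations to gendered pronouns; it assumes the Hugging Face transformers library, and the model choice and templates are illustrative.

```python
# Minimal sketch: probe a masked language model for occupation-gender
# associations. Model and templates are illustrative assumptions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The nurse said that [MASK] would be back soon.",
    "The engineer said that [MASK] would be back soon.",
]

for template in templates:
    # Score only the two pronouns; each result carries the token string
    # and the probability the model assigns to it in this context.
    results = unmasker(template, targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(template, scores)
```

A large, occupation-dependent gap between the two pronoun scores is one concrete signal of the "attributes that go together" beliefs described above.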
Demeaning social groups (Domain: 1.1 Unfair discrimination and misrepresentation)
Demeaning of social groups occurs when they are “cast as being lower status and less deserving of respect”... discourses, images, and language used to marginalize or oppress a social group... Controlling images include forms of human-animal confusion in image tagging systems.
Erasing social groups (Domain: 1.3 Unequal performance across groups)
Occurs when people, attributes, or artifacts associated with specific social groups are systematically absent or under-represented... Design choices [143] and training data [212] influence which people and experiences are legible to an algorithmic system.
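Such absences can be checked for directly in training data. The sketch below (an illustration, not a method from the paper) counts how many documents in a corpus mention each group at all; the GROUP_TERMS lists and the corpus are placeholder assumptions.

```python
# Minimal sketch of a representation audit over a text corpus: count how
# many documents mention each group at least once. Term lists and corpus
# are placeholder assumptions, not from Shelby et al. (2023).
from collections import Counter

GROUP_TERMS = {
    "disability-related": {"wheelchair", "deaf", "blind"},
    "gender-minority-related": {"nonbinary", "transgender", "queer"},
}

def representation_counts(corpus):
    """Count documents that mention each group at least once."""
    counts = Counter({group: 0 for group in GROUP_TERMS})
    for doc in corpus:
        tokens = set(doc.lower().split())
        for group, terms in GROUP_TERMS.items():
            if tokens & terms:
                counts[group] += 1
    return counts

corpus = ["A deaf athlete won the race.", "The weather was mild today."]
print(representation_counts(corpus))
# Counter({'disability-related': 1, 'gender-minority-related': 0})
```

Groups whose counts sit far below any reasonable baseline are candidates for the kind of systematic absence described above.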
Alienating social groups (Domain: 1.1 Unfair discrimination and misrepresentation)
Occurs, for example, when an image tagging system does not acknowledge the relevance of someone’s membership in a specific social group to what is depicted in one or more images.
Denying people the opportunity to self-identify (Domain: 1.1 Unfair discrimination and misrepresentation)
Complex and non-traditional ways in which humans are represented and classified automatically, often at the cost of autonomy... such as categorizing someone who identifies as non-binary into a gendered category to which they do not belong... undermines people’s ability to disclose aspects of their identity on their own terms.
Reifying essentialist categories (Domain: 1.1 Unfair discrimination and misrepresentation)
Algorithmic systems reify essentialist social categories when they classify a person’s membership in a social group based on narrow, socially constructed criteria that reinforce perceptions of human difference as inherent, static, and seemingly natural... This is especially likely when ML models or human raters classify a person’s attributes – for instance, their gender, race, or sexual orientation – by making assumptions based on their physical appearance.
Other risks from Shelby et al. (2023) (24)
Allocative Harms (Domain: 1.1 Unfair discrimination and misrepresentation)
Allocative Harms > Opportunity loss (Domain: 1.1 Unfair discrimination and misrepresentation)
Allocative Harms > Economic loss (Domain: 1.1 Unfair discrimination and misrepresentation)
Quality-of-Service Harms (Domain: 1.3 Unequal performance across groups)
Quality-of-Service Harms > Alienation (Domain: 1.3 Unequal performance across groups)
Quality-of-Service Harms > Increased labor (Domain: 1.3 Unequal performance across groups)
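The quality-of-service entries above all map to unequal performance across groups, which is directly measurable. Below is a minimal sketch, using illustrative data and a hypothetical accuracy_by_group helper, of comparing a model's per-group accuracy and reporting the largest gap.

```python
# Minimal sketch of a per-group performance audit, the measurement behind
# "1.3 Unequal performance across groups". Group names and records are
# illustrative placeholders.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, prediction, label) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += int(pred == label)
    return {group: hits[group] / totals[group] for group in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
acc = accuracy_by_group(records)
print(acc)  # {'group_a': 0.75, 'group_b': 0.25}
print("largest gap:", max(acc.values()) - min(acc.values()))  # 0.5
```

A persistent gap of this kind is what turns into the alienation and increased labor listed above: users in the worse-served group must work around a system that simply performs worse for them.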