Quality-of-Service Harms
The accuracy and effectiveness of AI decisions and actions depend on group membership: choices made in AI system design and biased training data produce unequal outcomes, reduced benefits, increased effort, and alienation for users in certain groups.
"These harms occur when algorithmic systems disproportionately underperform for certain groups of people along social categories of difference such as disability, ethnicity, gender identity, and race."(p. 730)
Sub-categories (3)
Alienation
Alienation is the specific self-estrangement experienced at the time of technology use, typically surfaced through interaction with systems that under-perform for marginalized individuals.
Increased labor (1.3 Unequal performance across groups)
Increased burden (e.g., time spent) or effort required of members of certain social groups to make systems or products work as well for them as for others.
Service/benefit loss (1.3 Unequal performance across groups)
Degraded or total loss of the benefits of using algorithmic systems, owing to inequitable system performance based on identity.
Other risks from Shelby et al. (2023) (24)
Representational Harms
Representational Harms > Stereotyping social groups (1.1 Unfair discrimination and misrepresentation)
Representational Harms > Demeaning social groups (1.1 Unfair discrimination and misrepresentation)
Representational Harms > Erasing social groups (1.1 Unfair discrimination and misrepresentation)
Representational Harms > Alienating social groups (1.3 Unequal performance across groups)
Representational Harms > Denying people the opportunity to self-identify (1.1 Unfair discrimination and misrepresentation)