Interpersonal Harms
Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leaving humans feeling disempowered, unable to shape a fulfilling life trajectory, or cognitively enfeebled.
Interpersonal harms capture instances when algorithmic systems adversely shape relations between people or communities. (p. 730)
Sub-categories (4)
Loss of agency/control
Loss of agency occurs when the use [123, 137] or abuse [142] of algorithmic systems reduces autonomy. One dimension of agency loss is algorithmic profiling [138], through which people are subjected to social sorting and discriminatory outcomes in access to basic services... presentation of content may lead to "algorithmically informed identity change... including [promotion of] harmful person identities (e.g., interests in white supremacy, disordered eating, etc.)." Similarly, content creators' desire to maintain visibility or avoid shadow banning may lead them to increasingly conform their content.
5.2 Loss of human agency and autonomy
Technology-facilitated violence
Technology-facilitated violence occurs when algorithmic features enable the use of a system for harassment and violence [2, 16, 44, 80, 108], including the creation of non-consensual sexual imagery with generative AI... other facets of technology-facilitated violence include doxxing [79], trolling [14], cyberstalking [14], cyberbullying [14, 98, 204], monitoring and control [44], and online harassment and intimidation [98, 192, 199, 226], under the broader banner of online toxicity.
4.3 Fraud, scams, and targeted manipulation
Diminished health & well-being
Diminished health and well-being arises from algorithmic behavioral exploitation [18, 209], emotional manipulation [202] whereby algorithmic designs exploit user behavior, safety failures involving algorithms (e.g., collisions) [67], and systems that make incorrect health inferences.
5.1 Overreliance and unsafe use
Privacy violations
Privacy violation occurs when algorithmic systems diminish privacy, such as by enabling the undesirable flow of private information [180], instilling the feeling of being watched or surveilled [181], and collecting data without explicit and informed consent... privacy violations may also arise when algorithmic systems make predictive inferences beyond what users openly disclose [222], or when data collected and algorithmic inferences made about people in one context are applied to another without the person's knowledge or consent through big data flows.
2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Other risks from Shelby et al. (2023) (24)
Representational Harms
1.1 Unfair discrimination and misrepresentation
Representational Harms > Stereotyping social groups
1.1 Unfair discrimination and misrepresentation
Representational Harms > Demeaning social groups
1.1 Unfair discrimination and misrepresentation
Representational Harms > Erasing social groups
1.3 Unequal performance across groups
Representational Harms > Alienating social groups
1.1 Unfair discrimination and misrepresentation
Representational Harms > Denying people the opportunity to self-identify
1.1 Unfair discrimination and misrepresentation
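The sub-category-to-risk mappings listed in this entry can be sketched as a simple lookup table. This is an illustrative encoding only, assuming a plain Python dict keyed by the Shelby et al. (2023) sub-category names used above; it is not an official schema of either taxonomy.

```python
# Illustrative sketch (assumption, not an official schema): the interpersonal-
# harm sub-categories above, mapped to the numbered risk labels given for each.
SUBCATEGORY_TO_RISK = {
    "Loss of agency/control": "5.2 Loss of human agency and autonomy",
    "Technology-facilitated violence": "4.3 Fraud, scams, and targeted manipulation",
    "Diminished health & well-being": "5.1 Overreliance and unsafe use",
    "Privacy violations": (
        "2.1 Compromise of privacy by leaking or correctly "
        "inferring sensitive information"
    ),
}

def mapped_risk(subcategory: str) -> str:
    """Return the mapped numbered risk label for a given sub-category."""
    return SUBCATEGORY_TO_RISK[subcategory]

print(mapped_risk("Privacy violations"))
```

A table like this makes the cross-taxonomy mapping queryable, e.g., for filtering a risk database by either classification.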