
Technology-facilitated violence

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction

Shelby et al. (2023)


Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, and creating humiliating or sexual imagery.

Technology-facilitated violence occurs when algorithmic features enable use of a system for harassment and violence [2, 16, 44, 80, 108], including creation of non-consensual sexual imagery in generative AI... other facets of technology-facilitated violence include doxxing [79], trolling [14], cyberstalking [14], cyberbullying [14, 98, 204], monitoring and control [44], and online harassment and intimidation [98, 192, 199, 226], under the broader banner of online toxicity (p. 732)

Part of Interpersonal Harms
