Fraud
Sub-category description: Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
"Facilitating fraud, cheating, forgery, and impersonation scams" (p. 30)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Malicious Use
Other risks from Weidinger et al. (2023) (26):
Risk | Category | Entity | Intent | Timing
1.0 Discrimination & Toxicity | Representation & Toxicity Harms | AI system | Unintentional | Post-deployment
1.1 Unfair discrimination and misrepresentation | Representation & Toxicity Harms > Unfair representation | AI system | Unintentional | Post-deployment
1.3 Unequal performance across groups | Representation & Toxicity Harms > Unfair capability distribution | AI system | Unintentional | Post-deployment
1.2 Exposure to toxic content | Representation & Toxicity Harms > Toxic content | AI system | Unintentional | Post-deployment
3.0 Misinformation | Misinformation Harms | AI system | Other | Post-deployment
3.1 False or misleading information | Misinformation Harms > Propagating misconceptions/false beliefs | AI system | Other | Post-deployment