Facilitating fraud, scams, and targeted manipulation
Using AI systems to gain a personal advantage over others, for example through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fake individual for illegitimate financial benefit, and creating humiliating or sexual imagery.
Anticipated risk: "LMs can potentially be used to increase the effectiveness of crimes." (p. 219)
Supporting Evidence (2)
Example: "Further, LMs may make email scams more effective by generating personalised and compelling text at scale, or by maintaining a conversation with a victim over multiple rounds of exchange.."(p. 219)
Example: "LM-generated content may also be fraudulently presented as a person’s own work, for example, to cheat on an exam."(p. 219)
Part of Risk area 4: Malicious Uses
Other risks from Weidinger et al. (2022) (25)

Risk area 1: Discrimination, Hate speech and Exclusion
- Social stereotypes and unfair discrimination → 1.1 Unfair discrimination and misrepresentation
- Hate speech and offensive language → 1.2 Exposure to toxic content
- Exclusionary norms → 1.1 Unfair discrimination and misrepresentation
- Lower performance for some languages and social groups → 1.3 Unequal performance across groups

Risk area 2: Information Hazards
- 2.1 Compromise of privacy by leaking or correctly inferring sensitive information