
Facilitating fraud, scam and targeted manipulation

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Sub-category · Risk Domain

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.

Anticipated risk: "LMs can potentially be used to increase the effectiveness of crimes." (p. 219)

Supporting Evidence (2)

1. Example: "Further, LMs may make email scams more effective by generating personalised and compelling text at scale, or by maintaining a conversation with a victim over multiple rounds of exchange." (p. 219)
2. Example: "LM-generated content may also be fraudulently presented as a person’s own work, for example, to cheat on an exam." (p. 219)

Part of Risk area 4: Malicious Uses
