
Facilitating fraud, scams and more targeted manipulation

Ethical and social risks of harm from language models

Weidinger et al. (2021)

Sub-category

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, and creating humiliating or sexual imagery.

"LM prediction can potentially be used to increase the effectiveness of crimes such as email scams, which can cause financial and psychological harm. While LMs may not reduce the cost of sending a scam email - the cost of sending mass emails is already low - they may make such scams more effective by generating more personalised and compelling text at scale, or by maintaining a conversation with a victim over multiple rounds of exchange."(p. 26)

Supporting Evidence (2)

1. "Simulating a person’s writing style or speech may also be used to enable more targeted manipulation at scale. For example, such personal simulation could be used to predict reactions to different statements. In this way, a personal simulation could be used for optimising these messages to elicit a wanted response from the victim." (p. 27)
2. "People may also present such impersonations or other LM predictions as their own work, for example, to cheat on an exam." (p. 27)

Risk Domain: Malicious Uses
