
Making disinformation cheaper and more effective

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)


Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

"While some predict that it will remain cheaper to hire humans to generate disinformation [180], it is equally possible that LM-assisted content generation may offer a lower-cost way of creating disinformation at scale." (p. 219)

Supporting Evidence (3)

1. "Disinformation campaigns could be used to mislead the public, shape public opinion on a particular topic, or to artificially inflate stock prices [56]." (p. 219)
2. "Disinformation could also be used to create false “majority opinions” by flooding sites with synthetic text, similar to bot-driven submissions that undermined a public consultation process in 2017 [74, 89, 111]." (p. 219)
3. "Large LMs can be used to generate synthetic content on arbitrary topics that is harder to detect, and indistinguishable from human-written fake news to human raters [203]. This suggests that LMs may reduce the cost of producing disinformation at scale [31]." (p. 219)

Part of Risk area 4: Malicious Uses
