
Making disinformation cheaper and more effective

Ethical and social risks of harm from language models

Weidinger et al. (2021)

Sub-category: Risk Domain

Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

"LMs can be used to create synthetic media and 'fake news', and may reduce the cost of producing disinformation at scale (Buchanan et al., 2021). While some predict that it will be cheaper to hire humans to generate disinformation (Tamkin et al., 2021), it is possible that LM-assisted content generation may offer a cheaper way of generating diffuse disinformation at scale." (p. 25)

Supporting Evidence (3)

1. "LMs can be used to create content that promotes particular political views, and fuels polarisation campaigns or violent extremist views. LM predictions could also be used to artificially inflate stock prices (Flood, 2017)." (p. 25)
2. Example: "Disinformation campaigns to undermine or polarise public discourse: A college student made international headlines by demonstrating that GPT-3 could be used to write compelling fake news." (p. 26)
3. Example: "Creating false 'majority opinions': For example, a US consultation on net neutrality in 2017 was overwhelmed by the high proportion of automated or bot-driven submissions to the Federal Communications Commission, undermining the public consultation process (Hitlin et al., 2017; James, 2021; Lapowsky, 2017)." (p. 26)

Part of Malicious Uses
