Making disinformation cheaper and more effective
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"LMs can be used to create synthetic media and ‘fake news’, and may reduce the cost of producing disinformation at scale (Buchanan et al., 2021). While some predict that it will be cheaper to hire humans to generate disinformation (Tamkin et al., 2021), it is possible that LM-assisted content generation may offer a cheaper way of generating diffuse disinformation at scale."(p. 25)
Supporting Evidence (3)
"LMs can be used to create content that promotes particular political views, and fuels polarisation campaigns or violent extremist views. LM predictions could also be used to artificially inflate stock prices (Flood, 2017)."(p. 25)
Example: "Disinformation campaigns to undermine or polarise public discourse: A college student made international headlines by demonstrating that GPT-3 could be used to write compelling fake news."(p. 26)
Example: "Creating false ‘majority opinions’: For example, a US consultation on net neutrality in 2017 was overwhelmed by the high proportion of automated or bot-driven submissions to the Federal Communications Commission, undermining the public consultation process (Hitlin et al., 2017; James, 2021; Lapowsky, 2017)."(p. 26)
Part of Malicious Uses
Other risks from Weidinger et al. (2021) (26)
Discrimination, Exclusion and Toxicity → 1.0 Discrimination & Toxicity
Discrimination, Exclusion and Toxicity > Social stereotypes and unfair discrimination → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Exclusionary norms → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Toxic language → 1.2 Exposure to toxic content
Discrimination, Exclusion and Toxicity > Lower performance for some languages and social groups → 1.3 Unequal performance across groups
Information Hazards → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information