Making disinformation cheaper and more effective
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"While some predict that it will remain cheaper to hire humans to generate disinformation [180], it is equally possible that LM-assisted content generation may offer a lower-cost way of creating disinformation at scale."(p. 219)
Supporting Evidence (3)
"Disinformation campaigns could be used to mislead the public, shape public opinion on a particular topic, or to artificially inflate stock prices [56]."(p. 219)
"Disinformation could also be used to create false “majority opinions” by flooding sites with synthetic text, similar to bot-driven submissions that undermined a public consultation process in 2017 [74, 89, 111]."(p. 219)
"Large LMs can be used to generate synthetic content on arbitrary topics that is harder to detect, and indistinguishable from human-written fake news to human raters [203]. This suggests that LMs may reduce the cost of producing disinformation at scale [31]."(p. 219)
Part of Risk area 4: Malicious Uses