
Information Manipulation

Generating Harms: Generative AI's Impact & Paths Forward

Electronic Privacy Information Center (2023)

Category: Risk Domain

Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

"generative AI tools can and will be used to propagate content that is false, misleading, biased, inflammatory, or dangerous. As generative AI tools grow more sophisticated, it will be quicker, cheaper, and easier to produce this content—and existing harmful content can serve as the foundation to produce more" (p. 2)

Sub-categories (5)

Scams

"Bad actors can also use generative AI tools to produce adaptable content designed to support a campaign, political agenda, or hateful position and spread that information quickly and inexpensively across many platforms. This rapid spread of false or misleading content—AI-facilitated disinformation—can also create a cyclical effect for generative AI: when a high volume of disinformation is pumped into the digital ecosystem and more generative systems are trained on that information via reinforcement learning methods, for example, false or misleading inputs can create increasingly incorrect outputs."

4.3 Fraud, scams, and targeted manipulation
Human · Intentional · Post-deployment
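The "cyclical effect" quoted above — model outputs recycled into training data compounding falsehoods — can be illustrated with a toy simulation. Everything here is an illustrative assumption (the rates, the mixing rule, the function name `simulate`), not a model from the report; it is a minimal sketch of how a false-content share can ratchet upward when outputs feed back into the pool.

```python
# Toy sketch of the feedback loop: a data pool contains some share of
# false content; a model trained on that pool emits outputs that are at
# least as false (plus a small hallucination rate); part of the next
# pool is drawn from those outputs. All parameters are illustrative.

def simulate(generations=5, false_share=0.05,
             hallucination_rate=0.02, recycle_weight=0.5):
    """Return the false-content share of the data pool per generation.

    false_share        -- initial fraction of false content in the pool
    hallucination_rate -- extra falsehoods the model adds on top of what
                          it absorbed from its training pool
    recycle_weight     -- fraction of the next pool drawn from model output
    """
    initial = false_share
    history = [false_share]
    for _ in range(generations):
        # Model output mirrors its training pool plus new errors,
        # capped at 1.0 (everything false).
        output_false = min(1.0, false_share + hallucination_rate)
        # Next pool mixes fresh human data with recycled model output.
        false_share = ((1 - recycle_weight) * initial
                       + recycle_weight * output_false)
        history.append(false_share)
    return history

print(simulate())
```

With these (assumed) parameters the false share rises monotonically each generation, approaching a higher equilibrium than the pool started with — a simple way to see why "false or misleading inputs can create increasingly incorrect outputs."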

Disinformation

"Bad actors can also use generative AI tools to produce adaptable content designed to support a campaign, political agenda, or hateful position and spread that information quickly and inexpensively across many platforms."

4.1 Disinformation, surveillance, and influence at scale
Human · Intentional · Post-deployment

Misinformation

"The phenomenon of inaccurate outputs by text-generating large language models like Bard or ChatGPT has already been widely documented. Even without the intent to lie or mislead, these generative AI tools can produce harmful misinformation. The harm is exacerbated by the polished and typically well-written style that AI generated text follows and the inclusion among true facts, which can give falsehoods a veneer of legitimacy. As reported in the Washington Post, for example, a law professor was included on an AI-generated “list of legal scholars who had sexually harassed someone,” even when no such allegation existed."

3.1 False or misleading information
AI system · Unintentional · Post-deployment

Security

"Though chatbots cannot (yet) develop their own novel malware from scratch, hackers could soon potentially use the coding abilities of large language models like ChatGPT to create malware that can then be minutely adjusted for maximum reach and effect, essentially allowing more novice hackers to become a serious security risk"

4.2 Cyberattacks, weapon development or use, and mass harm
Human · Intentional · Post-deployment

Clickbait and feeding the surveillance advertising ecosystem

"Beyond misinformation and disinformation, generative AI can be used to create clickbait headlines and articles, which manipulate how users navigate the internet and applications. For example, generative AI is being used to create full articles, regardless of their veracity, grammar, or lack of common sense, to drive search engine optimization and create more webpages that users will click on. These mechanisms attempt to maximize clicks and engagement at the truth’s expense, degrading users’ experiences in the process. Generative AI continues to feed this harmful cycle by spreading misinformation at faster rates, creating headlines that maximize views and undermine consumer autonomy."

3.2 Pollution of information ecosystem and loss of consensus reality
Other · Other · Other
