
Disinformation and Influence Operations

Capabilities and Risks from Frontier AI

DSIT (2023)

Sub-category: Risk Domain

Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

"In addition to unintentional degradation of the information environment (discussed in the section on Societal Harms above), frontier AI can be misused to deliberately spread false information to create disruption, persuade people on political issues, or cause other forms of harm or damage."(p. 25)

Supporting Evidence (2)

1. "Whilst improving media literacy is crucial, it is hard given that the quality of outputs from frontier AI is in many cases indistinguishable even to experts. This is a trend expected to increase with model size – in the GPT-3 paper, the authors' experiments found humans were better at distinguishing AI-generated text for smaller models, but for larger models they could only tell the difference about 52% of the time, barely above random chance.247" (p. 25)
2. "Frontier AI can generate hyper-targeted content with unprecedented scale and sophistication.242 This could lead to “personalised” disinformation, where bespoke messages are targeted at individuals rather than larger groups and are therefore more persuasive.243 Furthermore, one should expect that as AI-driven personalised disinformation campaigns unfold, these AIs will be able to learn from millions of interactions and become better at influencing and manipulating humans, possibly even becoming better than humans at this.244" (p. 25)

Part of: Loss of control
