
Politically motivated misuse

Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks

Maham & Küspert (2023)

Risk domain sub-category

Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

"General purpose AI models could exacerbate existing tactics for political destabilisation, such as disinformation campaigns, and surveillance efforts if misused for political motivations. The technological advancements in text and media generation of general purpose AI models could refine disinformation164 attempts to shape and polarise public opinion or influence important political events.165 The improved automated processing of text, audio, image, and video could be used for surveillance measures and exacerbate human right violations and repression of political oppositions." (p. 31)

Supporting Evidence (4)

1.
"General purpose AI models could increase the scale of disinformation campaigns by widening the group of actors and reducing the costs of creating persuasive content.167 With regard to text, first experiments with OpenAI’s GPT-3 showed human-level persuasiveness on political topics.168 Since its successor, GPT-4, has shown improved capabilities around a wide range of tasks, it can be expected to be more effective in political persuasion as well.169 Convincing content can be created with general purpose AI models to spread disinformation, damage reputations, and manipulate public opinion – alone, or in combination with increasingly realistic and believable “deepfakes”, a term used to describe images, videos, or audio files that were fabricated or manipulated by AI" (p. 31)
2.
"For example, in the past, a Russian troll-factory with a monthly budget exceeding one million dollars targeted the 2016 U.S. presidential election, spreading masses of Tweets about false news stories and “pro-Trump propaganda” online.172" (p. 31)
3.
"General purpose AI could not only make disinformation campaigns cheaper and more scalable, but also more effective, by generating increasingly persuasive content that is harder to detect. Integrated into downstream applications such as chatbots, general purpose AI can enable novel tactics, for example, one-on-one conversations with content that is highly personalised to its users. There is evidence that interactions like these can have a tangible influence on users’ views about controversial topics like the COVID-19 pandemic.173 When general purpose AI models show human-like traits, like empathy or emotional intelligence,174 it can increase the trust users put into them and their output. This can, in turn, increase the chance that people more easily accept the information propagated by such models without questioning it.175 Further, users who interact with AI models that appear more like humans are more likely to share private information,176 thereby enabling even more personalised attempts at persuasion." (p. 32)
4.
"The improved automated processing of text, audio, image, and video through general purpose AI models could also be misused for surveillance, analysing mass-collected data of people’s behaviour and beliefs, by lowering barriers for analysing such data.178 Improved image, voice and video recognition can be used to surveil public spaces, and monitor and censor social media content more efficiently in real-time." (p. 32)

Part of Misuse Risks
