Large-Scale Persuasion and Harmful Manipulation Risks
Risk Domain
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"AI systems can be gravely misused to distort public perception and compromise social stability through the generation of synthetic content (e.g., deepfakes, sophisticated fake news) and the strategic manipulation of digital platforms with large user bases to disseminate or precisely target misleading information or ideologies." (p. 6)
Entity — Who or what caused the harm
Intent — Whether the harm was intentional or unintentional
Timing — Whether the risk occurs pre- or post-deployment
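The three coding dimensions above can be pictured as a small structured record. The sketch below is purely illustrative: the class and value names are assumptions, not the repository's actual schema, and the value sets are inferred from the codes that appear in this entry (Human, AI system, Intentional, Unintentional, Post-deployment, Other, Not coded).

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of the three coding dimensions; value sets are
# inferred from the codes visible in this entry, not an official schema.
class Entity(Enum):
    HUMAN = "Human"
    AI_SYSTEM = "AI system"
    OTHER = "Other"
    NOT_CODED = "Not coded"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"
    NOT_CODED = "Not coded"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"
    NOT_CODED = "Not coded"

@dataclass
class RiskEntry:
    """One coded risk, e.g. a row like '4.0 Malicious Actors & Misuse'."""
    category: str
    entity: Entity
    intent: Intent
    timing: Timing

# Example: the misuse row from this entry.
misuse = RiskEntry(
    category="4.0 Malicious Actors & Misuse",
    entity=Entity.HUMAN,
    intent=Intent.INTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```

Keeping "Not coded" as an explicit value (rather than `None`) mirrors how the entry distinguishes dimensions that were deliberately left uncoded from ones that simply were not recorded.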
Supporting Evidence (1)
1. "AI can facilitate large-scale commercial fraud, manipulate public opinion through hyper-personalized disinformation campaigns, or generate fabricated information to induce consumption or improperly influence public judgment. Advanced AI systems can create convincing deepfake videos, synthetic audio recordings, and tailored propaganda that exploit individual psychological profiles and behavioral patterns." (p. 6)
Other risks from SAIL & Concordia AI (2025) (36)
Misuse Risks
4.0 Malicious Actors & Misuse (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Loss of Control Risks
5.2 Loss of human agency and autonomy (Entity: AI system; Other)
Accident Risks
7.3 Lack of capability or robustness (Entity: Human; Intent: Unintentional; Timing: Post-deployment)
Model Capabilities
7.2 AI possessing dangerous capabilities (Entity: Not coded; Intent: Not coded; Timing: Not coded)
Cyber Offense Risks
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Biological and Chemical Risks
4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Human; Intent: Intentional; Timing: Post-deployment)