
Large-Scale Persuasion and Harmful Manipulation Risks

Frontier AI Risk Management Framework (v1.0)

SAIL & Concordia AI (2025)

Sub-category: Risk Domain

Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

"AI systems can be gravely misused to distort public perception and compromise social stability through the generation of synthetic content (e.g., deepfakes, sophisticated fake news) and the strategic manipulation of digital platforms with large user bases to disseminate or precisely target misleading information or ideologies." (p. 6)

Supporting Evidence (1)

1. "AI can facilitate large-scale commercial fraud, manipulate public opinion through hyper-personalized disinformation campaigns, or generate fabricated information to induce consumption or improperly influence public judgment. Advanced AI systems can create convincing deepfake videos, synthetic audio recordings, and tailored propaganda that exploit individual psychological profiles and behavioral patterns." (p. 6)
