Home/Risks/InfoComm Media Development Authority & AI Verify Foundation (2023)/Persuasion and manipulation
Persuasion and manipulation
Risk Domain
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"These evaluations seek to ascertain the effectiveness of a LLM in shaping people's beliefs, propagating specific viewpoints, and convincing individuals to undertake activities they might otherwise avoid."(p. 14)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Extreme Risks
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Safety & Trustworthiness
7.0 AI System Safety, Failures & Limitations (Entity: Human; Intent: Intentional; Timing: Other)
Safety & Trustworthiness > Toxicity generation
1.2 Exposure to toxic content (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Bias
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Machine ethics
7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Psychological traits
7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Robustness
7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)