Propaganda
Risk Domain
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"LLMs can be leveraged by malicious users to proactively generate propaganda information that can facilitate the spreading of a target" (p. 19)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Resistance to Misuse
Other risks from Liu et al. (2024) (34)
Risk                          | Domain                                 | Entity    | Intent        | Timing
Reliability                   | 3.1 False or misleading information    | AI system | Unintentional | Post-deployment
Reliability > Misinformation  | 3.1 False or misleading information    | AI system | Unintentional | Post-deployment
Reliability > Hallucination   | 3.1 False or misleading information    | AI system | Unintentional | Post-deployment
Reliability > Inconsistency   | 7.3 Lack of capability or robustness   | AI system | Unintentional | Post-deployment
Reliability > Miscalibration  | 3.1 False or misleading information    | AI system | Unintentional | Post-deployment
Reliability > Sycophancy      | 3.1 False or misleading information    | AI system | Intentional   | Post-deployment