Manipulation
Risk Domain
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"The predictability of behaviour protocol in AI, particularly in some applications, can act an incentive to manipulate these systems."(p. 31)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Teixeira et al. (2022) (15)
| Risk | Domain | Entity | Intent | Timing |
| --- | --- | --- | --- | --- |
| Accountability | 7.4 Lack of transparency or interpretability | Other | Other | Other |
| Accuracy | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
| Moral | 7.3 Lack of capability or robustness | Other | Unintentional | Post-deployment |
| Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Pre-deployment |
| Opacity | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Post-deployment |
| Power | 6.1 Power centralization and unfair distribution of benefits | Human | Intentional | Other |
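Purely as an illustration (not part of the repository itself), the Entity/Intent/Timing coding of each risk above could be represented as a small data structure, which makes it easy to filter risks along any of the three dimensions. The `RiskEntry` class and `ENTRIES` list below are hypothetical names introduced for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskEntry:
    """One row of the mapping from Teixeira et al. (2022)."""
    risk: str     # risk name, e.g. "Accountability"
    domain: str   # taxonomy subdomain, e.g. "7.4 Lack of transparency or interpretability"
    entity: str   # who or what caused the harm
    intent: str   # whether the harm was intentional or accidental
    timing: str   # whether the risk is pre- or post-deployment

# The six risks from the table above, coded on the three dimensions.
ENTRIES = [
    RiskEntry("Accountability", "7.4 Lack of transparency or interpretability",
              "Other", "Other", "Other"),
    RiskEntry("Accuracy", "7.3 Lack of capability or robustness",
              "AI system", "Unintentional", "Post-deployment"),
    RiskEntry("Moral", "7.3 Lack of capability or robustness",
              "Other", "Unintentional", "Post-deployment"),
    RiskEntry("Bias", "1.1 Unfair discrimination and misrepresentation",
              "AI system", "Unintentional", "Pre-deployment"),
    RiskEntry("Opacity", "7.4 Lack of transparency or interpretability",
              "AI system", "Unintentional", "Post-deployment"),
    RiskEntry("Power", "6.1 Power centralization and unfair distribution of benefits",
              "Human", "Intentional", "Other"),
]

# Example query: risks attributed to the AI system itself, post-deployment.
post_deploy_ai = [e.risk for e in ENTRIES
                  if e.entity == "AI system" and e.timing == "Post-deployment"]
print(post_deploy_ai)  # ['Accuracy', 'Opacity']
```

Filtering on the other dimensions works the same way, e.g. selecting only intentionally caused risks returns just "Power".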