Manipulation
Category: Risk Domain
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"The 2016 scandal involving Cambridge Analytica is the most infamous example where people's data was crawled from Facebook and analytics were then provided to target these people with manipulative content for political purposes.While it may not have been AI per se, it is based on similar data and it is easy to see how AI would make this more effective"(p. 9)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Hogenhout (2021) (12)
| Risk | Subdomain | Entity | Intent | Timing |
|---|---|---|---|---|
| Incompetence | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
| Loss of privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human | Intentional | Post-deployment |
| Discrimination | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment |
| Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Pre-deployment |
| Erosion of Society | 3.2 Pollution of information ecosystem and loss of consensus reality | AI system | Unintentional | Post-deployment |
| Lack of transparency | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Other |