Other ethical risks
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"Although we have discussed a number of common risks posed by ML systems, we acknowledge that there are many other ethical risks such as the potential for psychological manipulation, dehumanization, and exploitation of humans at scale."(p. 15)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of: Second-Order Risks
Other risks from Tan, Taeihagh & Baxter (2022) (17):
Risk                                           | Risk Domain                                  | Entity    | Intent      | Timing
First-Order Risks                              | 7.0 AI System Safety, Failures & Limitations | Other     | Other       | Other
First-Order Risks > Application                | 7.0 AI System Safety, Failures & Limitations | Human     | Intentional | Post-deployment
First-Order Risks > Misapplication             | 7.3 Lack of capability or robustness         | Human     | Intentional | Post-deployment
First-Order Risks > Algorithm                  | 7.3 Lack of capability or robustness         | AI system | Unintentional | Pre-deployment
First-Order Risks > Training & validation data | 7.0 AI System Safety, Failures & Limitations | Human     | Other       | Pre-deployment
First-Order Risks > Robustness                 | 7.3 Lack of capability or robustness         | AI system | Unintentional | Post-deployment