Unauthorized manipulation of AI
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
"AI machines could be hacked and misused, e.g. manipulating an airport luggage screening system to smuggle weapons"(p. 690)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Meek et al. (2016) (17)
| Risk | Subdomain | Entity | Intent | Timing |
| --- | --- | --- | --- | --- |
| Unethical decision making | 7.3 Lack of capability or robustness | AI system | Intentional | Post-deployment |
| Privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human | Intentional | Post-deployment |
| Human dignity/respect | 5.2 Loss of human agency and autonomy | Other | Other | Post-deployment |
| Decision making transparency | 7.4 Lack of transparency or interpretability | AI system | Other | Post-deployment |
| Safety | 7.3 Lack of capability or robustness | AI system | Other | Post-deployment |
| Law abiding | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |