Adversarial attack
Risk Domain
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
"Recent advances have shown that a deep learning model with high predictive accuracy frequently misbehaves on adversarial examples [57,58]. In particular, a small perturbation to an input image, which is imperceptible to humans, could fool a well-trained deep learning model into making completely different predictions [23]."(p. 5)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (1)
1. "In general, adversarial attacks can be grouped into two classes: 1. Targeted adversarial attack: The goal of targeted adversarial attack is to make an AI/ML model classify an adversarial image with a true label of K as a target class T (T ≠ K) through intentional design (i.e., data manipulation). 2. Untargeted adversarial attack: The objective of untargeted adversarial attack is to make an AI/ML model generate a prediction that is different from the true label without intended target" (p. 5)
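The perturbation-based attack the quotes describe can be sketched with the fast gradient sign method (FGSM), a standard gradient-based attack: nudge the input by a small step in the direction that increases the model's loss. This is a minimal illustrative example, not the method from Zhang et al. (2022); the logistic classifier, weights, and epsilon below are invented for the demonstration.

```python
import numpy as np

def fgsm_untargeted(x, w, b, y_true, eps):
    """Untargeted FGSM step on a logistic classifier p = sigmoid(w.x + b).

    Perturbs x in the direction that increases the cross-entropy loss
    for the true label: x_adv = x + eps * sign(dL/dx).
    """
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))     # predicted probability of class 1
    grad = (p - y_true) * w          # gradient of the loss w.r.t. the input x
    return x + eps * np.sign(grad)

# Toy example (hypothetical values): x is classified as class 1 (z > 0),
# and a small signed perturbation flips the prediction to class 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.4, -0.3, 0.2])       # clean input: z = 1.1 > 0
x_adv = fgsm_untargeted(x, w, b, y_true=1, eps=0.5)
z_adv = float(np.dot(w, x_adv) + b)  # perturbed input: z_adv < 0
```

A targeted variant would instead minimize the loss toward a chosen target class T; the only change is the sign of the step and the label used in the gradient.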
Other risks from Zhang et al. (2022) (6)
Risk: Risk Domain | Entity | Intent | Timing
Data bias: 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Pre-deployment
Dataset shift: 7.3 Lack of capability or robustness | AI system | Unintentional | Other
Out-of-domain data: 7.3 Lack of capability or robustness | AI system | Unintentional | Other
Model bias: 1.1 Unfair discrimination and misrepresentation | Other | Unintentional | Pre-deployment
Model misspecification: 7.3 Lack of capability or robustness | AI system | Unintentional | Other
Model prediction uncertainty: 7.3 Lack of capability or robustness | AI system | Unintentional | Other