
Adversarial attack

Towards risk-aware artificial intelligence and machine learning systems: An overview

Zhang et al. (2022)

Risk Domain

Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.

"Recent advances have shown that a deep learning model with high predictive accuracy frequently misbehaves on adversarial examples [57,58]. In particular, a small perturbation to an input image, which is imperceptible to humans, could fool a well-trained deep learning model into making completely different predictions [23]." (p. 5)

Supporting Evidence

"In general, adversarial attacks can be grouped into two classes: 1. Targeted adversarial attack: The goal of targeted adversarial attack is to make an AI/ML model classify an adversarial image with a true label of K as a target class T (T ≠ K) through intentional design (i.e., data manipulation). 2. Untargeted adversarial attack: The objective of untargeted adversarial attack is to make an AI/ML model generate a prediction that is different from the true label without intended target" (p. 5)
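The targeted/untargeted distinction quoted above can be sketched in code. The following is a minimal, hypothetical FGSM-style example on a linear softmax classifier (the paper's cited attacks target deep networks, but the gradient-sign logic is the same); the `fgsm` helper and the toy model are assumptions for illustration, not the paper's method. An untargeted attack steps the input along the sign of the loss gradient to push the prediction away from the true label K; a targeted attack steps against the gradient of the loss for the target class T to pull the prediction toward T.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, W, b, label, eps, target=None):
    """FGSM-style perturbation (sketch) for a linear softmax classifier.

    Untargeted (target is None): add eps * sign(grad of cross-entropy loss
    w.r.t. x), increasing the loss on the true label K.
    Targeted: subtract eps * sign(grad of the loss for class T), decreasing
    the loss on the target label T (T != K).
    """
    p = softmax(W @ x + b)
    if target is None:
        # d(CE w.r.t. true label)/d(logits) = p - one_hot(label)
        g = p.copy()
        g[label] -= 1.0
        return x + eps * np.sign(W.T @ g)   # move away from true class
    else:
        # d(CE w.r.t. target label)/d(logits) = p - one_hot(target)
        g = p.copy()
        g[target] -= 1.0
        return x - eps * np.sign(W.T @ g)   # move toward target class
```

For instance, with an identity-weight two-class model and input `x = [1, 0]` (predicted class 0), either variant with a sufficiently large `eps` flips the prediction; the "imperceptible" attacks in the quoted text correspond to much smaller `eps` on high-dimensional image inputs.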
