Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
"Artificial intelligence comes with an intrinsic set of challenges that need to be considered when discussing trustworthiness, especially in the context of functional safety. AI models, especially those with higher complexities (such as neural networks), can exhibit specific weaknesses not found in other types of systems and must, therefore, be subjected to higher levels of scrutiny, especially when deployed in a safety-critical context" (p. 21)
Other risks from Steimers & Schneider (2022) (7)
Fairness
  1.1 Unfair discrimination and misrepresentation
Privacy
  2.0 Privacy & Security
Degree of Automation and Control
  7.1 AI pursuing its own goals in conflict with human goals or values
Complexity of the Intended Task and Usage Environment
  7.3 Lack of capability or robustness
Degree of Transparency and Explainability
  7.4 Lack of transparency or interpretability
System Hardware
  7.3 Lack of capability or robustness