Data poisoning
Risk Domain
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
"Data poisoning describes an attack in the form of an injection of malicious data into the training set. If not prevented, this attack leads the AI system to learn unintended behavior."(p. 9)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Schnitzer2024 (24)
| Risk | Risk domain | Entity | Intent | Timing |
| --- | --- | --- | --- | --- |
| Inadequate specification of ODD | 7.3 Lack of capability or robustness | Human | Unintentional | Pre-deployment |
| Inappropriate degree of automation | 7.2 AI possessing dangerous capabilities | AI system | Unintentional | Post-deployment |
| Inadequate planning of performance requirements | 7.3 Lack of capability or robustness | Human | Unintentional | Pre-deployment |
| Insufficient AI development documentation | 7.4 Lack of transparency or interpretability | Human | Other | Pre-deployment |
| Inappropriate degree of transparency to end users | 7.4 Lack of transparency or interpretability | Human | Other | Pre-deployment |
| Choice of untrustworthy data source | 7.0 AI System Safety, Failures & Limitations | Human | Unintentional | Pre-deployment |