Lack of robustness
Risk Domain
AI systems that fail to perform reliably or effectively under varying conditions are exposed to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"Robustness characterizes the resilience of an AI system’s output against minor changes in the input domain. A great variation in an AI system’s response to small input changes indicates unreliable outputs." (p. 10)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Schnitzer2024 (24)
Risk | Domain | Entity | Intent | Timing
Inadequate specification of ODD | 7.3 Lack of capability or robustness | Human | Unintentional | Pre-deployment
Inappropriate degree of automation | 7.2 AI possessing dangerous capabilities | AI system | Unintentional | Post-deployment
Inadequate planning of performance requirements | 7.3 Lack of capability or robustness | Human | Unintentional | Pre-deployment
Insufficient AI development documentation | 7.4 Lack of transparency or interpretability | Human | Other | Pre-deployment
Inappropriate degree of transparency to end users | 7.4 Lack of transparency or interpretability | Human | Other | Pre-deployment
Choice of untrustworthy data source | 7.0 AI System Safety, Failures & Limitations | Human | Unintentional | Pre-deployment