System Hardware
Risk Domain: AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"Faults in the hardware can violate the correct execution of any algorithm by violating its control flow. Hardware faults can also cause memory-based errors and interfere with data inputs, such as sensor signals, thereby causing erroneous results, or they can violate the results in a direct way through damaged outputs." (p. 22)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Steimers & Schneider (2022)
Fairness
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Privacy
2.0 Privacy & Security (Entity: AI system; Intent: Other; Timing: Other)
Degree of Automation and Control
7.1 AI pursuing its own goals in conflict with human goals or values (Entity: AI system; Intent: Other; Timing: Post-deployment)
Complexity of the Intended Task and Usage Environment
7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Degree of Transparency and Explainability
7.4 Lack of transparency or interpretability (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)
Security
2.2 AI system security vulnerabilities and attacks (Entity: Other; Intent: Other; Timing: Post-deployment)