Protection
Risk Domain
AI systems that fail to perform reliably or effectively under varying conditions, leading to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
"'Gaps' that arise across the development process where normal conditions for a complete specification of intended functionality and moral responsibility are not present." (p. 31)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
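The three causal dimensions above form a small, closed classification scheme, which can be captured as a data structure. The following is a hypothetical Python sketch; the class and field names are our own illustration, not part of any official tooling for the taxonomy:

```python
from dataclasses import dataclass

# Allowed values for each causal dimension of the taxonomy.
# "Other" covers cases where the dimension is ambiguous or unspecified.
ENTITIES = {"AI system", "Human", "Other"}
INTENTS = {"Intentional", "Unintentional", "Other"}
TIMINGS = {"Pre-deployment", "Post-deployment", "Other"}

@dataclass(frozen=True)
class CausalClassification:
    entity: str  # who or what caused the harm
    intent: str  # whether the harm was intentional or accidental
    timing: str  # whether the risk arises pre- or post-deployment

    def __post_init__(self):
        # Reject values outside the taxonomy's closed vocabularies.
        if self.entity not in ENTITIES:
            raise ValueError(f"unknown entity: {self.entity}")
        if self.intent not in INTENTS:
            raise ValueError(f"unknown intent: {self.intent}")
        if self.timing not in TIMINGS:
            raise ValueError(f"unknown timing: {self.timing}")

# Example: the "Accuracy" risk is classified as caused by the AI system,
# unintentionally, after deployment.
accuracy = CausalClassification("AI system", "Unintentional", "Post-deployment")
```

Making the dataclass frozen and validating in `__post_init__` means a classification can only exist with values drawn from the taxonomy.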
Other risks from Teixeira et al. (2022) (15)
| Risk | Risk domain | Entity | Intent | Timing |
| --- | --- | --- | --- | --- |
| Accountability | 7.4 Lack of transparency or interpretability | Other | Other | Other |
| Manipulation | 4.1 Disinformation, surveillance, and influence at scale | AI system | Intentional | Post-deployment |
| Accuracy | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
| Moral | 7.3 Lack of capability or robustness | Other | Unintentional | Post-deployment |
| Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Pre-deployment |
| Opacity | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Post-deployment |
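The mapping above can also be queried programmatically, for instance to group the Teixeira et al. (2022) risks by risk-domain code. This is a minimal sketch under our own naming; the data are simply the rows of the table:

```python
# Risks from Teixeira et al. (2022), each mapped to a risk domain and its
# causal classification: (domain, entity, intent, timing).
RISKS = {
    "Accountability": ("7.4 Lack of transparency or interpretability",
                       "Other", "Other", "Other"),
    "Manipulation":   ("4.1 Disinformation, surveillance, and influence at scale",
                       "AI system", "Intentional", "Post-deployment"),
    "Accuracy":       ("7.3 Lack of capability or robustness",
                       "AI system", "Unintentional", "Post-deployment"),
    "Moral":          ("7.3 Lack of capability or robustness",
                       "Other", "Unintentional", "Post-deployment"),
    "Bias":           ("1.1 Unfair discrimination and misrepresentation",
                       "AI system", "Unintentional", "Pre-deployment"),
    "Opacity":        ("7.4 Lack of transparency or interpretability",
                       "AI system", "Unintentional", "Post-deployment"),
}

def risks_by_domain(code_prefix: str) -> list[str]:
    """Return the names of risks whose domain code starts with the prefix."""
    return [name for name, (domain, *_) in RISKS.items()
            if domain.startswith(code_prefix)]

# Risks that fall under domain 7:
print(risks_by_domain("7."))  # ['Accountability', 'Accuracy', 'Moral', 'Opacity']
```

Such a query makes the table's pattern explicit: four of the six risks map to domain 7, and all but one are classified as unintentional.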