Model misspecification
Risk Domain
AI systems that fail to perform reliably or effectively under varying conditions are exposed to errors and failures that can have significant consequences, especially in critical applications or areas requiring moral reasoning.
"Models that are misspecified are known to give rise to inaccurate parameter estimations, inconsistent error terms, and erroneous predictions. All these factors put together will lead to poor prediction performance on unseen data and biased consequences when making decisions [68]."(p. 6)
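The quoted failure mode can be made concrete with a minimal sketch (not from the source; names and numbers are illustrative): fitting a linear model to data generated by a quadratic process is a simple case of misspecification, and the wrong functional form shows up as inflated error on held-out data.

```python
# Illustrative sketch of model misspecification: a linear fit to
# quadratic data yields worse held-out predictions than a correctly
# specified quadratic fit. Data-generating process and seed are
# arbitrary choices for the example.
import numpy as np

rng = np.random.default_rng(0)

# True process: y = 1 + 2x + 3x^2 + Gaussian noise
x = rng.uniform(-2, 2, size=200)
y = 1 + 2 * x + 3 * x**2 + rng.normal(scale=0.5, size=x.size)

# Simple train/test split
x_tr, x_te = x[:150], x[150:]
y_tr, y_te = y[:150], y[150:]

def fit_and_mse(degree):
    """Fit a polynomial of the given degree on the training split
    and return mean squared error on the held-out split."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    pred = np.polyval(coeffs, x_te)
    return float(np.mean((pred - y_te) ** 2))

mse_misspecified = fit_and_mse(1)   # wrong functional form (linear)
mse_wellspecified = fit_and_mse(2)  # matches the true process

print(mse_misspecified > mse_wellspecified)  # expect True
```

The misspecified model's parameter estimates absorb the missing quadratic term, so its errors are systematic rather than random, matching the quote's point about biased predictions on unseen data.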
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
Other risks from Zhang et al. (2022) (6)
Data bias: 1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Pre-deployment)
Dataset shift: 7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)
Out-of-domain data: 7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)
Adversarial attack: 2.2 AI system security vulnerabilities and attacks (Entity: Human; Intent: Intentional; Timing: Other)
Model bias: 1.1 Unfair discrimination and misrepresentation (Entity: Other; Intent: Unintentional; Timing: Pre-deployment)
Model prediction uncertainty: 7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)