Robustness and Reliability
Risks from AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures with potentially significant consequences, especially in critical applications or domains that require moral reasoning.
The robustness of an AI-based model refers to the stability of the model's performance under abnormal changes in the input data... The cause of such a change may be a malicious attacker, environmental noise, or a crash of other components of an AI-based system... This problem may be challenging in HLI-based agents because weak robustness may appear in unreliable machine learning models; hence, an HLI agent with this drawback is error-prone in practice.
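Robustness in this sense can be probed empirically by perturbing inputs and checking whether the model's predictions stay stable. The sketch below is illustrative only: `model` is a hypothetical stand-in for a trained classifier (not anything from the source), and Gaussian noise stands in for the "abnormal changes" the quotation describes.

```python
import random

def model(x):
    # Hypothetical toy classifier standing in for a trained model:
    # predicts class 1 when the feature sum is positive, else 0.
    return 1 if sum(x) > 0 else 0

def robustness_score(inputs, noise_scale=0.1, trials=100):
    """Fraction of noisy copies whose prediction matches the clean prediction.

    A score near 1.0 means the model's outputs are stable under small
    input perturbations; a low score signals weak robustness.
    """
    random.seed(0)  # reproducible perturbations for this sketch
    stable, total = 0, 0
    for x in inputs:
        clean = model(x)
        for _ in range(trials):
            noisy = [xi + random.gauss(0.0, noise_scale) for xi in x]
            stable += (model(noisy) == clean)
            total += 1
    return stable / total

# Confident inputs stay stable; the near-boundary input flips often.
print(robustness_score([[0.5, 0.4], [-0.3, -0.2], [0.05, -0.04]]))
```

Inputs far from the decision boundary keep their prediction under noise, while borderline inputs flip frequently, which is exactly the kind of instability the quoted passage warns makes an agent error-prone in practice.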
Other risks from Saghiri et al. (2022) (15):
- Energy Consumption: 6.6 Environmental harm
- Data Issues: 1.1 Unfair discrimination and misrepresentation
- Cheating and Deception: 7.2 AI possessing dangerous capabilities
- Security: 2.2 AI system security vulnerabilities and attacks
- Privacy: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
- Fairness: 1.3 Unequal performance across groups