Technical
AI systems that fail to perform reliably or effectively under varying conditions, leaving them prone to errors and failures that can have significant consequences, especially in critical applications or in areas that require moral reasoning.
"Technical AI hazards are the root causes of technical deficiencies in the AI system. An example of such an AI hazard is overfitting, which describes a model's excessive adaptation to the training dataset. Quantitative methods to assess (metrics) and treat (mitigation means) exist for technical AI hazards, which might be performed automatically. In case of overfitting, metrics are based on the comparison of performance between the training and validation datasets, and mitigation means may include regularization techniques, among others." (p. 7)
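The quoted passage names both a metric (the gap between training and validation performance) and a mitigation (regularization) for overfitting. The following is a minimal self-contained sketch of that idea, not code from the cited source: it fits a deliberately over-flexible polynomial model to noisy linear data, once without and once with an L2 (ridge) penalty, and computes the train/validation gap as the overfitting metric. All function names and the toy dataset are illustrative assumptions.

```python
# Illustrative sketch: train/validation gap as an overfitting metric,
# L2 regularization (ridge) as a mitigation. Pure stdlib; toy data.
import random

def design(xs, degree):
    # Polynomial feature expansion: [1, x, x^2, ..., x^degree].
    return [[x ** d for d in range(degree + 1)] for x in xs]

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_ridge(xs, ys, degree, lam):
    # Closed-form ridge: w = (X^T X + lam*I)^-1 X^T y; lam=0 gives plain
    # least squares. (For illustration the bias term is penalized too.)
    X = design(xs, degree)
    n = degree + 1
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X)))
            + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Xty = [sum(X[k][i] * ys[k] for k in range(len(X))) for i in range(n)]
    return solve(XtX, Xty)

def mse(w, xs, ys, degree):
    # Mean squared error of the polynomial model w on (xs, ys).
    X = design(xs, degree)
    return sum((sum(wi * xi for wi, xi in zip(w, row)) - y) ** 2
               for row, y in zip(X, ys)) / len(ys)

random.seed(0)
DEGREE = 8                                   # far too flexible for the data
train_x = [i / 9 for i in range(10)]
train_y = [x + random.gauss(0, 0.2) for x in train_x]  # true relation: y = x
val_x = [i / 9 + 0.05 for i in range(9)]
val_y = list(val_x)                          # noise-free validation targets

w_plain = fit_ridge(train_x, train_y, DEGREE, lam=0.0)
w_reg = fit_ridge(train_x, train_y, DEGREE, lam=0.1)

# The overfitting metric from the quote: validation error minus training error.
gap_plain = mse(w_plain, val_x, val_y, DEGREE) - mse(w_plain, train_x, train_y, DEGREE)
gap_reg = mse(w_reg, val_x, val_y, DEGREE) - mse(w_reg, train_x, train_y, DEGREE)
```

A large `gap_plain` signals memorization of training noise; the ridge penalty shrinks the coefficient norm and typically narrows the gap. In practice this check runs automatically inside training frameworks (e.g. early stopping on validation loss), which is the automation the quote alludes to.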
Other risks from Schnitzer2024 (24):
Inadequate specification of ODD → 7.3 Lack of capability or robustness
Inappropriate degree of automation → 7.2 AI possessing dangerous capabilities
Inadequate planning of performance requirements → 7.3 Lack of capability or robustness
Insufficient AI development documentation → 7.4 Lack of transparency or interpretability
Inappropriate degree of transparency to end users → 7.4 Lack of transparency or interpretability
Choice of untrustworthy data source → 7.0 AI System Safety, Failures & Limitations