Procedural
AI systems that fail to perform reliably or effectively under varying conditions are exposed to errors and failures with potentially significant consequences, especially in critical applications or in areas that require moral reasoning.
"The third class encompasses procedural AI hazards. These pertain to issues arising from processes and actions made by individuals involved in the development process. Such hazards are not readily quantifiable and necessitate alternative mitigation strategies. An example of such an AI hazard would be "poor model design choices," which could be expressed, for instance, through a developer's decision to select an unsuitable AI model for a given problem. Due to the challenges in quantifying and mitigating these issues, qualitative approaches must be employed. In the case of the aforementioned example, a potential strategy might involve requiring the AI developer to provide a documented rationale for their choice." (p. 7)
Other risks from Schnitzer2024 (24)
- Inadequate specification of ODD → 7.3 Lack of capability or robustness
- Inappropriate degree of automation → 7.2 AI possessing dangerous capabilities
- Inadequate planning of performance requirements → 7.3 Lack of capability or robustness
- Insufficient AI development documentation → 7.4 Lack of transparency or interpretability
- Inappropriate degree of transparency to end users → 7.4 Lack of transparency or interpretability
- Choice of untrustworthy data source → 7.0 AI System Safety, Failures & Limitations