
Lack of explainability

Category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.

"The explainability of AI systems based on so-called black-box models is often limited. This opaqueness of AI systems can prevent developers from detecting shortcomings in the data or the model itself and decrease the performance and safety levels of the AI system."(p. 10)
