
Degree of Transparency and Explainability

Source: Steimers & Schneider (2022), Sources of Risk of AI Systems

Category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems. These can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.

"Transparency is the characteristic of a system that describes the degree to which appropriate information about the system is communicated to relevant stakeholders, whereas explainability describes the property of an AI system to express important factors influencing the results of the AI system in a way that is understandable for humans. ... Information about the model underlying the decision-making process is relevant for transparency. Systems with a low degree of transparency can pose risks in terms of their fairness, security and accountability." (p. 19)
