
Transparency - Explainability

Mapping the Ethics of Generative AI: A Comprehensive Scoping Review

Hagendorff (2024)

Category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or in holding relevant actors accountable for harms, and an inability to identify and correct errors.

'Transparency' is a multifaceted concept, used to refer both to technical explainability and to organizational openness. Regarding the former, papers underscore the need for mechanistic interpretability and for explaining the internal mechanisms of generative models. On the organizational front, transparency relates to practices such as informing users about the capabilities and shortcomings of models, as well as adhering to documentation and reporting requirements for data collection processes or risk evaluations. (p. 8)
