
Transparency and explainability

Ethical Issues in the Development of Artificial Intelligence: Recognizing the Risks

Kumar & Singh (2023)

Category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems. These can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.

"A recurring complaint among participants was a lack of knowledge about how AI systems made judgements. They emphasized the significance of making AI systems more visible and explainable so that people may have confidence in their outputs and hold them accountable for their activities. Because AI systems are typically opaque, making it difficult for users to understand the rationale behind their judgements, ethical concerns about AI, as well as issues of transparency and explainability, arise. This lack of understanding can generate suspicion and reluctance to adopt AI technology, as well as making it harder to hold AI systems accountable for their actions." (p. 10)
