Transparency and explainability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"A recurring complaint among participants was a lack of knowledge about how AI systems made judgements. They emphasized the significance of making AI systems more visible and explainable so that people may have confidence in their outputs and hold them accountable for their activities. Because AI systems are typically opaque, making it difficult for users to understand the rationale behind their judgements, ethical concerns about AI, as well as issues of transparency and explainability, arise. This lack of understanding can generate suspicion and reluctance to adopt AI technology, as well as making it harder to hold AI systems accountable for their actions."(p. 10)
Other risks from Kumar & Singh (2023) (4)
Privacy and security
2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Bias and fairness
1.1 Unfair discrimination and misrepresentation
Human–AI interaction
5.2 Loss of human agency and autonomy
Trust and reliability
7.4 Lack of transparency or interpretability