Explainability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.
"A recurrent concern about AI algorithms is the lack of explainability for the model, which means information about how the algorithm arrives at its results is deficient (Deeks, 2019). Specifically, for generative AI models, there is no transparency to the reasoning of how the model arrives at the results (Dwivedi et al., 2023). The lack of transparency raises several issues. First, it might be difficult for users to interpret and understand the output (Dwivedi et al., 2023). It would also be difficult for users to discover potential mistakes in the output (Rudin, 2019). Further, when the interpretation and evaluation of the output are inaccessible, users may have problems trusting the system and their responses or recommendations (Burrell, 2016). Additionally, from the perspective of law and regulations, it would be hard for the regulatory body to judge whether the generative AI system is potentially unfair or biased (Rieder & Simon, 2017)."(p. 289)
Part of Technology concerns
Other risks from Nah et al. (2023) (17)
Technology concerns → 7.3 Lack of capability or robustness
Technology concerns > Hallucination → 3.1 False or misleading information
Technology concerns > Quality of training data → 7.3 Lack of capability or robustness
Technology concerns > Authenticity → 6.3 Economic and cultural devaluation of human effort
Technology concerns > Prompt engineering → 7.4 Lack of transparency or interpretability
Regulations and policy challenges → 6.5 Governance failure