Transparency - Explainability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
'Transparency' is a multifaceted concept, used to refer both to technical explainability and to organizational openness. Regarding the former, papers underscore the need for mechanistic interpretability and for explaining the internal mechanisms of generative models. On the organizational front, transparency relates to practices such as informing users about the capabilities and shortcomings of models, as well as adhering to documentation and reporting requirements for data collection processes or risk evaluations. (p. 8)
Entity
- Human: Due to a decision or action made by humans
- AI system: Due to a decision or action made by an AI system
- Other: Due to some other reason or is ambiguous
- Not coded

Intentionality
- Intentional: Due to an expected outcome from pursuing a goal
- Unintentional: Due to an unexpected outcome from pursuing a goal
- Other: Without clearly specifying the intentionality
- Not coded

Timing
- Pre-deployment: Occurring before the AI is deployed
- Post-deployment: Occurring after the AI model has been trained and deployed
- Other: Without a clearly specified time of occurrence
- Not coded
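The three coding dimensions above can be sketched as a small data structure. This is a hypothetical illustration only: the class names (`Entity`, `Intent`, `Timing`, `CodedRisk`) and the example risk are assumptions, not an official schema from the source.

```python
from dataclasses import dataclass
from enum import Enum


class Entity(Enum):
    """Who or what caused the risk (descriptions copied from the coding scheme)."""
    HUMAN = "Due to a decision or action made by humans"
    AI_SYSTEM = "Due to a decision or action made by an AI system"
    OTHER = "Due to some other reason or is ambiguous"
    NOT_CODED = "Not coded"


class Intent(Enum):
    """Whether the outcome was expected from pursuing a goal."""
    INTENTIONAL = "Due to an expected outcome from pursuing a goal"
    UNINTENTIONAL = "Due to an unexpected outcome from pursuing a goal"
    OTHER = "Without clearly specifying the intentionality"
    NOT_CODED = "Not coded"


class Timing(Enum):
    """When the risk occurs relative to deployment."""
    PRE_DEPLOYMENT = "Occurring before the AI is deployed"
    POST_DEPLOYMENT = "Occurring after the AI model has been trained and deployed"
    OTHER = "Without a clearly specified time of occurrence"
    NOT_CODED = "Not coded"


@dataclass
class CodedRisk:
    """One risk annotated along the three causal dimensions."""
    description: str
    entity: Entity
    intent: Intent
    timing: Timing


# Hypothetical example: coding a hallucination-style risk.
risk = CodedRisk(
    description="Model produces false or misleading information",
    entity=Entity.AI_SYSTEM,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```

Each "Not coded" value is kept as an explicit member so that unannotated risks remain distinguishable from risks coded as "Other".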
Other risks from Hagendorff (2024) (16)
- Fairness - Bias: 1.1 Unfair discrimination and misrepresentation
- Safety: 7.1 AI pursuing its own goals in conflict with human goals or values
- Harmful Content - Toxicity: 1.2 Exposure to toxic content
- Hallucinations: 3.1 False or misleading information
- Privacy: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
- Interaction risks: 5.1 Overreliance and unsafe use
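As a minimal sketch, the crosswalk above can be expressed as a lookup table. The risk labels and subdomain codes are copied from the list; the dictionary and function names are illustrative assumptions, not part of the source.

```python
# Crosswalk from Hagendorff (2024) risk labels to taxonomy subdomains.
# Labels and codes are taken verbatim from the mapping above; the
# dictionary structure itself is illustrative only.
HAGENDORFF_2024_MAPPING = {
    "Fairness - Bias": "1.1 Unfair discrimination and misrepresentation",
    "Safety": "7.1 AI pursuing its own goals in conflict with human goals or values",
    "Harmful Content - Toxicity": "1.2 Exposure to toxic content",
    "Hallucinations": "3.1 False or misleading information",
    "Privacy": "2.1 Compromise of privacy by leaking or correctly "
               "inferring sensitive information",
    "Interaction risks": "5.1 Overreliance and unsafe use",
}


def subdomain_for(risk_label: str) -> str:
    """Return the mapped subdomain, raising KeyError for unmapped labels."""
    return HAGENDORFF_2024_MAPPING[risk_label]
```

Raising `KeyError` for unmapped labels keeps gaps in the crosswalk visible instead of silently defaulting.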