Lack of transparency
Category: Risk Domain
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"The idea of a "black box" making decisions without any explanation, without offering insight in the process, has a couple of disadvantages: it may fail to gain the trust of its users and it may fail to meet regulatory standards such as the ability to audit."(p. 8)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
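The Entity/Intent/Timing scheme above is a three-axis causal taxonomy, so each risk entry can be modeled as a record with one value per axis. The following is a minimal sketch in Python; the class and field names (`RiskEntry`, `subdomain`, etc.) are illustrative choices, not part of the repository itself, and the example entry reuses the "Incompetence" row listed below.

```python
from dataclasses import dataclass
from enum import Enum

# One enum per axis of the causal taxonomy.
class Entity(Enum):
    AI_SYSTEM = "AI system"
    HUMAN = "Human"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"

@dataclass
class RiskEntry:
    name: str        # risk name as given in the source taxonomy
    subdomain: str   # numbered subdomain label, e.g. "7.3 ..."
    entity: Entity
    intent: Intent
    timing: Timing

# Example: the "Incompetence" risk from Hogenhout (2021), classified per the taxonomy.
incompetence = RiskEntry(
    name="Incompetence",
    subdomain="7.3 Lack of capability or robustness",
    entity=Entity.AI_SYSTEM,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
```

Using enums rather than free-form strings keeps the three axes closed-vocabulary, so misspelled classifications fail at construction time rather than silently polluting the data.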
Other risks from Hogenhout (2021) (12)
Risk               | Subdomain                                                                     | Entity    | Intent        | Timing
-------------------|-------------------------------------------------------------------------------|-----------|---------------|----------------
Incompetence       | 7.3 Lack of capability or robustness                                          | AI system | Unintentional | Post-deployment
Loss of privacy    | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human     | Intentional   | Post-deployment
Discrimination     | 1.1 Unfair discrimination and misrepresentation                               | AI system | Unintentional | Post-deployment
Bias               | 1.1 Unfair discrimination and misrepresentation                               | AI system | Unintentional | Pre-deployment
Erosion of Society | 3.2 Pollution of information ecosystem and loss of consensus reality          | AI system | Unintentional | Post-deployment
Deception          | 4.3 Fraud, scams, and targeted manipulation                                   | AI system | Other         | Post-deployment