Risks from models and algorithms (Risks of explainability)
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity (TC260) (2024)
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
"AI algorithms, represented by deep learning, have complex internal workings. Their black-box or grey-box inference process results in unpredictable and untraceable outputs, making it challenging to quickly rectify them or trace their origins for accountability should any anomalies arise." (p. 6)
Other risks from National Technical Committee 260 on Cybersecurity (TC260) (2024) (25)
Risks from models and algorithms (Risks of bias and discrimination)
1.1 Unfair discrimination and misrepresentation
Risks from models and algorithms (Risks of robustness)
7.3 Lack of capability or robustness
Risks from models and algorithms (Risks of stealing and tampering)
2.2 AI system security vulnerabilities and attacks
Risks from models and algorithms (Risks of unreliable output)
3.1 False or misleading information
Risks from models and algorithms (Risks of adversarial attack)
2.2 AI system security vulnerabilities and attacks
Risks from data (Risks of illegal collection and use of data)
2.1 Compromise of privacy by leaking or correctly inferring sensitive information