
Risks from models and algorithms (Risks of explainability)

AI Safety Governance Framework

National Technical Committee 260 on Cybersecurity (TC260) (2024)

Sub-category: Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems. Such opacity can lead to mistrust, make it difficult to enforce compliance standards or hold relevant actors accountable for harms, and prevent errors from being identified and corrected.

"AI algorithms, represented by deep learning, have complex internal workings. Their black-box or grey-box inference process results in unpredictable and untraceable outputs, making it challenging to quickly rectify them or trace their origins for accountability should any anomalies arise."(p. 6)
