
Risks from models and algorithms (Risks of robustness)

AI Safety Governance Framework

National Technical Committee 260 on Cybersecurity (TC260) (2024)

Risk domain sub-category

AI systems may fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or in areas that require moral reasoning.

"As deep neural networks are normally non-linear and large in size, AI systems are susceptible to complex and changing operational environments or malicious interference and inductions, possibly leading to various problems like reduced performance and decision-making errors."(p. 6)
