
Risks from AI systems (Risks of exploitation through defects and backdoors)

AI Safety Governance Framework

National Technical Committee 260 on Cybersecurity (TC260) (2024)

Sub-category
Risk Domain

Vulnerabilities in AI systems, software development toolchains, and hardware that can be exploited, resulting in unauthorized access, data and privacy breaches, or system manipulation that causes unsafe outputs or behavior.

"The standardized API, feature libraries, toolkits used in the design, training, and verification stages of AI algorithms and models, development interfaces, and execution platforms may contain logical flaws and vulnerabilities. These weaknesses can be exploited, and in some cases, backdoors can be intentionally embedded, posing significant risks of being triggered and used for attacks." (p. 8)
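One common mitigation for the tampering and backdoor risks described above is integrity verification of toolchain artifacts: pinning a cryptographic hash for each model file or toolkit package and refusing to load anything whose hash does not match. The sketch below is illustrative only, not part of the TC260 framework; the function names and the stand-in "model file" are hypothetical.

```python
import hashlib
import tempfile
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the artifact matches its pinned hash."""
    return sha256_of(path) == expected_sha256.lower()


# Demo with a hypothetical model artifact and its pinned hash.
with tempfile.TemporaryDirectory() as d:
    artifact = Path(d) / "model.bin"
    artifact.write_bytes(b"weights-v1")
    pinned = hashlib.sha256(b"weights-v1").hexdigest()

    assert verify_artifact(artifact, pinned)           # untampered: accepted

    artifact.write_bytes(b"weights-v1-with-backdoor")  # simulate tampering
    assert not verify_artifact(artifact, pinned)       # tampered: rejected
```

Hash pinning only detects modification after the hash was recorded; it does not help if a backdoor was embedded before the artifact was first published, which is why the framework also flags the design and training stages themselves.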
