
Risks from models and algorithms (Risks of stealing and tampering)

AI Safety Governance Framework

National Technical Committee 260 on Cybersecurity (TC260) (2024)

Risk Domain: Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.

"Core algorithm information, including parameters, structures, and functions, faces risks of inversion attacks, stealing, modification, and even backdoor injection, which can lead to infringement of intellectual property rights (IPR) and leakage of business secrets. It can also lead to unreliable inference, wrong decision output, and even operational failures." (p. 7)
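One common mitigation for the tampering and backdoor-injection risks named above is to verify the integrity of model artifacts before loading them. The sketch below is illustrative only, not part of the TC260 framework: it assumes model weights are shipped as a file alongside a trusted SHA-256 digest (the function names `file_sha256` and `verify_model` are hypothetical), and refuses any file whose digest does not match.

```python
import hashlib


def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream the file so large weight files do not load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, expected_digest: str) -> bool:
    """Return True only if the model file matches the trusted digest.

    A modified or backdoored file will produce a different digest,
    so the caller can refuse to load it.
    """
    return file_sha256(path) == expected_digest
```

A hash check of this kind detects modification of stored parameters, but it does not by itself protect against stealing via inversion or query-based extraction attacks, which require separate controls such as query rate limiting and access auditing.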
