Cognitive risks (Risks of usage in launching cognitive warfare)
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity (TC260) (2024)
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
"AI can be used to make and spread fake news, images, audio, and videos; propagate content of terrorism, extremism, and organized crimes; interfere in the internal affairs of other countries, social systems, and social order; and jeopardize the sovereignty of other countries." (p. 12)
Other risks from National Technical Committee 260 on Cybersecurity (TC260) (2024) (25)
Risks from models and algorithms (Risks of explainability): 7.4 Lack of transparency or interpretability
Risks from models and algorithms (Risks of bias and discrimination): 1.1 Unfair discrimination and misrepresentation
Risks from models and algorithms (Risks of robustness): 7.3 Lack of capability or robustness
Risks from models and algorithms (Risks of stealing and tampering): 2.2 AI system security vulnerabilities and attacks
Risks from models and algorithms (Risks of unreliable output): 3.1 False or misleading information
Risks from models and algorithms (Risks of adversarial attack): 2.2 AI system security vulnerabilities and attacks