Real-world risks (Risks of using AI in illegal and criminal activities)
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity (TC260) (2024)
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted individual (or posing as a fabricated one) for illegitimate financial gain, and creating humiliating or sexual imagery.
"AI can be used in traditional illegal or criminal activities related to terrorism, violence, gambling, and drugs, such as teaching criminal techniques, concealing illicit acts, and creating tools for illegal and criminal activities."(p. 11)
Other risks from National Technical Committee 260 on Cybersecurity (TC260) (2024) (25)
Risks from models and algorithms (Risks of explainability): 7.4 Lack of transparency or interpretability
Risks from models and algorithms (Risks of bias and discrimination): 1.1 Unfair discrimination and misrepresentation
Risks from models and algorithms (Risks of robustness): 7.3 Lack of capability or robustness
Risks from models and algorithms (Risks of stealing and tampering): 2.2 AI system security vulnerabilities and attacks
Risks from models and algorithms (Risks of unreliable output): 3.1 False or misleading information
Risks from models and algorithms (Risks of adversarial attack): 2.2 AI system security vulnerabilities and attacks