Malicious Use of AI
Risk Domain
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, and creating humiliating or sexual imagery.
Malicious use of AI can endanger digital security, physical security, and political security. International law enforcement agencies grapple with a variety of risks linked to the malicious use of AI. (p. 4)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or unintentional
Timing: Whether the risk arises pre- or post-deployment
Other risks from Habbal et al. (2024) (6)
Risk | Subdomain | Entity | Intent | Timing
Bias and Discrimination | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment
Privacy Invasion | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | AI system | Unintentional | Post-deployment
Society Manipulation | 4.1 Disinformation, surveillance, and influence at scale | AI system | Intentional | Post-deployment
Deepfake Technology | 4.1 Disinformation, surveillance, and influence at scale | Human | Intentional | Post-deployment
Lethal Autonomous Weapons Systems (LAWS) | 4.2 Cyberattacks, weapon development or use, and mass harm | AI system | Intentional | Post-deployment
Insufficient Security Measures | 2.2 AI system security vulnerabilities and attacks | Human | Intentional | Post-deployment