Abuse & Misuse
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
"The potential for AI systems to be used maliciously or irresponsibly, including for creating deepfakes, automated cyber attacks, or invasive surveillance systems. Specifically denotes intentional use of AI for harm." (p. 23048)
- Entity: who or what caused the harm
- Intent: whether the harm was intentional or accidental
- Timing: whether the risk arises pre- or post-deployment
Other risks from Sherman & Eisenberg (2023)
| Risk | Subdomain | Entity | Intent | Timing |
|---|---|---|---|---|
| Compliance | 6.5 Governance failure | AI system | Other | Post-deployment |
| Environmental & Societal Impact | 6.0 Socioeconomic & Environmental | Other | Other | Post-deployment |
| Explainability & Transparency | 7.4 Lack of transparency or interpretability | AI system | Other | Other |
| Fairness & Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Other |
| Long-term & Existential Risk | 7.1 AI pursuing its own goals in conflict with human goals or values | Other | Other | Post-deployment |
| Performance & Robustness | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |