Security
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
"Implications of the weaponization of AI for defence (the embeddedness of AI-based capabilities across the land, air, naval and space domains may affect combined arms operations)."(p. 31)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Teixeira et al. (2022) (15)
Risk            Subdomain                                                  Entity     Intent         Timing
Accountability  7.4 Lack of transparency or interpretability               Other      Other          Other
Manipulation    4.1 Disinformation, surveillance, and influence at scale   AI system  Intentional    Post-deployment
Accuracy        7.3 Lack of capability or robustness                       AI system  Unintentional  Post-deployment
Moral           7.3 Lack of capability or robustness                       Other      Unintentional  Post-deployment
Bias            1.1 Unfair discrimination and misrepresentation            AI system  Unintentional  Pre-deployment
Opacity         7.4 Lack of transparency or interpretability               AI system  Unintentional  Post-deployment