Type 5: Criminal weaponization
Risk Domain
Using AI systems to develop cyber weapons (e.g., by making malware cheaper and more effective), to develop new weapons or enhance existing ones (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to deploy weapons to cause mass harm.
"One or more criminal entities could create AI to intentionally inflict harms, such as for terrorism or combating law enforcement." (Critch & Russell, 2023, p. 3)
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
Other risks from Critch & Russell (2023)
| Type | Risk domain | Entity | Intent | Timing |
|------|-------------|--------|--------|--------|
| Type 1: Diffusion of responsibility | 6.5 Governance failure | AI system | Unintentional | Other |
| Type 2: Bigger than expected | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
| Type 3: Worse than expected | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment |
| Type 4: Willful indifference | 6.4 Competitive dynamics | Human | Unintentional | Post-deployment |
| Type 6: State weaponization | 4.2 Cyberattacks, weapon development or use, and mass harm | Human | Intentional | Post-deployment |