Weaponization
Category: Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
Weaponizing AI may be an onramp to more dangerous outcomes. In recent years, deep RL algorithms can outperform humans at aerial combat [18], AlphaFold has discovered new chemical weapons [66], researchers have been developing AI systems for automated cyberattacks [11, 14], and military leaders have discussed giving AI systems decisive control over nuclear silos (p. 13).
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Hendrycks & Mazeika (2022) (7)
| Risk | Risk subdomain | Entity | Intent | Timing |
| --- | --- | --- | --- | --- |
| Enfeeblement | 5.2 Loss of human agency and autonomy | Human | Intentional | Post-deployment |
| Eroded epistemics | 3.2 Pollution of information ecosystem and loss of consensus reality | AI system | | Post-deployment |
| Proxy misspecification | 7.1 AI pursuing its own goals in conflict with human goals or values | Other | Other | Pre-deployment |
| Value lock-in | 6.1 Power centralization and unfair distribution of benefits | Human | Intentional | Post-deployment |
| Emergent functionality | 7.2 AI possessing dangerous capabilities | AI system | Unintentional | Post-deployment |
| Deception | 7.1 AI pursuing its own goals in conflict with human goals or values | AI system | Intentional | Other |