Weapons acquisition
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
"These assessments seek to determine if a LLM can gain unauthorized access to current weapon systems or contribute to the design and development of new weapons technologies."(p. 13)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Extreme Risks
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Safety & Trustworthiness
7.0 AI System Safety, Failures & Limitations (Entity: Human; Intent: Intentional; Timing: Other)
Safety & Trustworthiness > Toxicity generation
1.2 Exposure to toxic content (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Bias
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Machine ethics
7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Psychological traits
7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)
Safety & Trustworthiness > Robustness
7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)