Dual-Use Science
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to deploy weapons to cause mass harm.
"LLM has science capabilities that can be used to cause harm (e.g., providing step-by-step instructions for conducting malicious experiments)"(p. 14)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Part of Extreme Risks
Other risks from InfoComm Media Development Authority & AI Verify Foundation (2023) (22)
Safety & Trustworthiness
7.0 AI System Safety, Failures & Limitations (Entity: Human; Intent: Intentional; Timing: Other)

Safety & Trustworthiness > Toxicity generation
1.2 Exposure to toxic content (Entity: AI system; Intent: Other; Timing: Other)

Safety & Trustworthiness > Bias
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Other; Timing: Other)

Safety & Trustworthiness > Machine ethics
7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)

Safety & Trustworthiness > Psychological traits
7.3 Lack of capability or robustness (Entity: AI system; Intent: Other; Timing: Other)

Safety & Trustworthiness > Robustness
7.3 Lack of capability or robustness (Entity: AI system; Intent: Unintentional; Timing: Other)