Biological Risks
Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy
Risk Domain
4.2 Cyberattacks, weapon development or use, and mass harm: Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to use weapons to cause mass harm.
"Biological risks encompass the dangerous modification of pathogens and unethical manipulation of genetic material, potentially leading to unforeseen biohazardous outcomes." (p. 6)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
Supporting Evidence (1)
1. Examples: "Pathogen manipulation", "Unethical gene editing", "Biohazardous outcomes" (p. 4)
Other risks from Tang 2025 (7)
Chemical Risks: 4.2 Cyberattacks, weapon development or use, and mass harm (Entity: Human; Intent: Intentional; Timing: Post-deployment)
Radiological Risks: 7.3 Lack of capability or robustness (Entity: Other; Intent: Other; Timing: Post-deployment)
Physical (Mechanical) Risks: 7.3 Lack of capability or robustness (Entity: Other; Intent: Unintentional; Timing: Post-deployment)
Information Science Risks: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information (Entity: Other; Intent: Other; Timing: Post-deployment)
Malicious and Direct: 4.0 Malicious Actors & Misuse (Entity: Human; Intent: Intentional; Timing: Other)
Malicious and Indirect: 4.0 Malicious Actors & Misuse (Entity: Other; Intent: Intentional; Timing: Other)