
Biosecurity Threats

Governing General Purpose AI: A Comprehensive Map of Unreliability, Misuse and Systemic Risks

Maham & Küspert (2023)

Sub-category: Risk Domain

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.

"The potential misuse of general purpose AI models also extends to biosecurity threats. Biological weapons are generally understood as biological toxins or infectious agents such as viruses that are intentionally released to cause disease and death.157 General purpose AI models could facilitate the production of biological weapons, by reducing barriers through access to critical knowledge or increasingly automated assistance and thus enable more malicious actors."(p. 30)

Supporting Evidence (2)

1. "AI models have already been applied to accelerate scientific research. Weaponised, this capability could have serious security implications. For example, researchers were able to use an AI model to generate toxic molecules. Within hours, the model not only generated highly toxic molecules that were already known as chemical warfare agents, but also new molecules predicted to be even more toxic than some of the most lethal molecules known.159 Alpha Fold, a protein-structure-prediction model developed by DeepMind, predicted the structure for most proteins known to science.160 Another AI system based on a general purpose AI model was able to design completely new and functional protein structures161, a process that traditionally was highly time- and labour-intensive."(p. 30)
2. "Given these models’ abilities to autonomously conduct experiments and research, laypeople could gain easier access to dangerous information and assistance in developing biological weapons. Even without a model acting increasingly autonomously, OpenAI acknowledges potential threats stemming from “GPT-4’s ability to generate publicly accessible but difficult-to-find information, shortening the time users spend on research and compiling this information in a way that is understandable to a non-expert user”163."(p. 31)

Part of Misuse Risks
