
Failures in or misuse of intermediary (non-AGI) AI systems, resulting in catastrophe

Advancing AI Governance: A Literature Review of Problems, Options, and Proposals

Maas (2023)

Sub-category: Risk Domain

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., lethal autonomous weapons, or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to deploy weapons to cause mass harm.

"Deployment of “prepotent” AI systems that are non-general but capable of outperforming human collective efforts on various key dimensions;170
→ Militarization of AI enabling mass attacks using swarms of lethal autonomous weapons systems;171
→ Military use of AI leading to (intentional or unintentional) nuclear escalation, either because machine learning systems are directly integrated in nuclear command and control systems in ways that result in escalation172 or because conventional AI-enabled systems (e.g., autonomous ships) are deployed in ways that result in provocation and escalation;173
→ Nuclear arsenals serving as an arsenal “overhang” for advanced AI systems;174
→ Use of AI to accelerate research into catastrophically dangerous weapons (e.g., bioweapons);175" (p. 33)
