
Malicious use and abuse (military applications)

Regulating under Uncertainty: Governance Options for Generative AI

G'sell (2024)

Risk Domain

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.

"The advancement of AI for military purposes is rapidly ushering in a new phase of growth in military technology. Lethal Autonomous Weapons Systems (LAWS) possess the capability to detect, engage, and eliminate human targets independently, without human input.341 In 2020, a sophisticated AI agent surpassed experienced F-16 pilots in multiple simulated aerial combat scenarios, notably achieving a 5-0 victory against a human pilot through “aggressive and precise maneuvers” that the human could not surpass.342 Additionally, fully autonomous drones are already operational."(p. 77)

Supporting Evidence (1)

1. Although it does not always directly involve generative AI, the deployment of advanced AI technologies by military forces raises significant concerns due to their enhanced capabilities and the potential implications these tools present. (p. 77)
