Malicious use and abuse (military applications)
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware); to develop new weapons or enhance existing ones (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons); or to deploy weapons to cause mass harm.
"The advancement of AI for military purposes is rapidly ushering in a new phase of growth in military technology. Lethal Autonomous Weapons Systems (LAWS) possess the capability to detect, engage, and eliminate human targets independently, without human input [341]. In 2020, a sophisticated AI agent surpassed experienced F-16 pilots in multiple simulated aerial combat scenarios, notably achieving a 5-0 victory against a human pilot through “aggressive and precise maneuvers” that the human could not surpass [342]. Additionally, fully autonomous drones are already operational." (p. 77)
Supporting Evidence (1)
Although it does not always directly involve generative AI, the deployment of advanced AI technologies by military forces raises significant concerns because of the enhanced capabilities these tools provide and the implications of their use. (p. 77)
Other risks from G'sell (2024) (33)
Technical and operational risks
Technical and operational risks > Technical vulnerabilities (Robustness - unexpected behaviour) → 7.3 Lack of capability or robustness
Technical and operational risks > Technical vulnerabilities (Robustness - vulnerability to jailbreaking) → 2.2 AI system security vulnerabilities and attacks
Technical and operational risks > Technical vulnerabilities (The risk of misalignment) → 7.1 AI pursuing its own goals in conflict with human goals or values
Technical and operational risks > Factually incorrect content (inaccuracies and fabricated sources) → 3.1 False or misleading information
Technical and operational risks > Opacity (the black box problem) → 7.4 Lack of transparency or interpretability