AI may be used to gain a political or strategic advantage or to cause harm at scale through cyber operations or the development and use of weapons. Advances in AI have given malicious actors powerful tools that can enable more frequent, more severe, and more precise cyberattacks. Hackers could use the coding abilities of AI assistants to develop malware more effectively and at lower cost. With AI, even actors with limited coding and technical experience could prompt a model to produce and optimize malware that discovers and exploits system vulnerabilities, including self-replicating and fully automated variants.
The development and application of weapons could also be accelerated and intensified by AI. For example, AI systems with specialized knowledge of bioengineering could make it easier for more actors to design new bioweapons. In 2022, a small pharmaceutical company used a generative AI model to design 40,000 candidate toxic molecules, including potential chemical nerve agents, in less than six hours.
AI could also enable autonomous devices, such as drones, to be used as weapons. AI has already assisted in the development and application of Lethal Autonomous Weapons Systems (LAWS) – weapons that can operate without human oversight and use computer algorithms to identify and attack targets. Overall, AI's ability to process vast amounts of data quickly may empower actors to act on a much larger scale than would otherwise be possible. AI can manage multiple attack vectors simultaneously, coordinating them to maximize disruption and harm.
Excerpt from the MIT AI Risk Repository full report
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
Incident volume relative to governance coverage — each dot is one of 24 subdomains
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
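For readers who work with repository data programmatically, here is a minimal sketch of how an incident record classified along these three causal dimensions might be represented. The class and field names (RiskEntry, Entity, Intent, Timing) are illustrative assumptions, not the repository's actual schema.

from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the causal taxonomy's three dimensions.
# Names and category values are hypothetical, not the repository's schema.

class Entity(Enum):  # who or what caused the harm
    AI = "AI"
    HUMAN = "Human"
    OTHER = "Other"

class Intent(Enum):  # whether the harm was intentional or accidental
    INTENTIONAL = "Intentional"
    ACCIDENTAL = "Accidental"
    OTHER = "Other"

class Timing(Enum):  # whether the risk is pre- or post-deployment
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"

@dataclass
class RiskEntry:
    description: str
    entity: Entity
    intent: Intent
    timing: Timing

# Example: deliberate misuse of a deployed model by human actors.
entry = RiskEntry(
    description="Malware development assisted by a coding model",
    entity=Entity.HUMAN,
    intent=Intent.INTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
print(entry)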
NBC News discovered that OpenAI's ChatGPT models could be jailbroken using simple prompts to generate instructions for creating weapons of mass destruction, including biological, chemical, and nuclear weapons, despite safety guardrails.
Developers: OpenAI
Deployers: OpenAI
Cybercriminals used Anthropic's Claude AI system to conduct sophisticated cyberattacks including large-scale data extortion, fraudulent employment schemes, and ransomware development, with the AI making autonomous tactical and strategic decisions throughout the attack lifecycle.
Developers: Anthropic
Deployers: Unknown Cybercriminals, Ransomware-as-a-Service Actors, North Korean IT Operatives
APT28 (Fancy Bear) deployed LAMEHUG malware that integrated the Qwen2.5-Coder-32B-Instruct large language model to dynamically generate system reconnaissance and data exfiltration commands, targeting Ukrainian government officials through phishing emails in July 2025.
Developers: Alibaba, Hugging Face
Deployers: APT28 (Fancy Bear)
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
217 shared governance docs
AI developers or state-like actors competing in an AI ‘race’ by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk they release unsafe and error-prone systems.
180 shared governance docs
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
165 shared governance docs
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
151 shared governance docs
Authorizes the Secretary of Defense to establish AI Institutes focused on national security. Directs support for interdisciplinary AI research, partnerships, innovation ecosystems, and workforce development.
Establishes the Artificial Intelligence Futures Steering Committee by April 1, 2026, under the Secretary of Defense. Directs it to develop policies for AI adoption, assess AI trajectories, and analyze AI risks and adversary developments. Requires quarterly meetings and a report to U.S. Congress by January 31, 2027.
Requires the Secretary of Defense to develop a cybersecurity policy for AI/ML systems no later than 180 days after the act is passed, and to conduct a comprehensive review of the effectiveness of AI/ML policies. The policy must address potential security risks, implement methods to mitigate those risks, and establish standard policy. Requires a comprehensive report on threats and cybersecurity measures by August 31, 2026.