Cyber offence
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., lethal autonomous weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to deploy weapons to cause mass harm.
"Attackers are beginning to use general-purpose AI for offensive cyber operations, presenting growing but currently limited risks. Current systems have demonstrated capabilities in low- and medium-complexity cybersecurity tasks, with state-sponsored threat actors actively exploring AI to survey target systems. Malicious actors of varying skill levels can leverage these capabilities against people, organisations, and critical infrastructure such as power grids."(p. 72)
Supporting Evidence (4)
"Cyber risk arises because general-purpose AI enables rapid and parallel operations at scale and lowers technical barriers. While expert knowledge is still essential, AI tools reduce the human effort and knowledge needed to survey target systems and gain unauthorised access."(p. 72)
"General-purpose AI offers significant dual-use cyber capabilities. Evidence indicates that general-purpose AI could accelerate processes such as discovering vulnerabilities, which are essential for launching attacks as well as strengthening defences. However, resource constraints and regulations may prevent critical services and smaller organisations from adopting AI-enhanced defences. The ultimate impact of AI on the attacker-defender balance remains unclear."(p. 72)
"Offensive cyber operations typically involve designing and deploying malicious software (malware) and exploiting vulnerabilities in software and hardware systems, leading to severe security breaches. A standard attack chain begins with reconnaissance of the target system, followed by iterative discovery, exploitation of vulnerabilities, and additional information gathering. These actions demand careful planning and strategic execution to achieve the adversary's objectives while avoiding detection. Some experts are concerned that general-purpose AI could enhance these operations by automating vulnerability detection, optimising attack strategies, and improving evasion techniques (348, 349). These advanced capabilities would benefit all attackers. For instance, state actors could leverage them to target critical national infrastructure (CNI), resulting in widespread disruption and significant damage. At the same time, general-purpose AI could also be used defensively, for example to find and fix vulnerabilities."(p. 73)
"General-purpose AI can assist with information-gathering tasks, thereby reducing human effort. For example, in ransomware attacks, malicious actors first manually conduct offensive reconnaissance and exploit vulnerabilities to gain entry to the target network, and then release malware that spreads without human intervention (350). The entry phase is often technically challenging and prone to failure. General-purpose AI is being explored by state-sponsored attackers as an aid to speed up the process (351*, 352*). However, while there are general-purpose systems that have performed vulnerability discovery autonomously (see next paragraphs), published systems have not yet autonomously executed real-world intrusions into networks and systems – tasks that are inherently more complex."(p. 73)
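The "automated vulnerability detection" the quotes refer to can be illustrated on the defensive side without any AI at all: even a simple static-analysis pass over source code can flag risky patterns before an attacker finds them. The sketch below is a hypothetical, minimal example (the function names and the list of flagged calls are illustrative, not drawn from the report); real scanners, and the AI-assisted systems the report discusses, are far more sophisticated.

```python
# Minimal sketch of defensive, automated vulnerability discovery:
# a static-analysis pass that flags calls to functions commonly
# associated with code- or command-injection vulnerabilities.
# The RISKY_CALLS set is illustrative, not exhaustive.
import ast

RISKY_CALLS = {"eval", "exec", "os.system", "subprocess.call", "pickle.loads"}

def _call_name(node: ast.Call) -> str:
    """Return a dotted name for a call node, e.g. 'os.system'."""
    func = node.func
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Parse Python source and return (line, call) pairs for risky calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and _call_name(node) in RISKY_CALLS:
            findings.append((node.lineno, _call_name(node)))
    return findings

sample = "import os\nuser_input = input()\nos.system(user_input)\n"
print(flag_risky_calls(sample))  # → [(3, 'os.system')]
```

The point of the sketch is scale, not sophistication: a pass like this runs over an entire codebase in seconds, which is the same property (rapid, parallel operation) that the report notes cuts both ways between attackers and defenders.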
Part of Risks from malicious use
Other risks from Bengio2025 (13):
- Risks from malicious use → 4.0 Malicious Actors & Misuse
- Risks from malicious use > Harm to individuals through fake content → 4.3 Fraud, scams, and targeted manipulation
- Risks from malicious use > Manipulation of public opinion → 4.1 Disinformation, surveillance, and influence at scale
- Risks from malicious use > Biological and chemical attacks → 4.2 Cyberattacks, weapon development or use, and mass harm
- Reliability issues → 7.3 Lack of capability or robustness
- Bias → 1.1 Unfair discrimination and misrepresentation