
Risk Domain: Cyber

Source: Capabilities and Risks from Frontier AI, DSIT (2023)

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.

"As the programming abilities of AI systems continue to expand, frontier AI is likely to significantly exacerbate existing cyber risks. Most notably, AI systems can be used by potentially anyone to create faster paced, more effective and larger scale cyber intrusion via tailored phishing methods or replicating malware. Frontier AI’s effect on the overall balance between cyber offence and defence is uncertain, as these tools also have many applications in improving the cybersecurity of systems and defenders are mobilising significant resources to utilise frontier AI for defensive purposes.209 In the future, we may see AI systems both conducting and defending against cyberattacks with reduced human oversight at each step."(p. 23)

Supporting Evidence (4)

1. "Frontier AI can upskill threat actors by advising on attack techniques, critiquing cyberattack plans, or finding relevant information about a target.210 Some models have measures to avoid supporting cyber criminals, but these are frequently circumvented through ‘jailbreaks’.211" (p. 23)
2. "Frontier AI systems are saving skilled threat actors time. For example, AI systems have helped create computer viruses that change over time to avoid detection, which previously would have required significant time from experts.212 Users on underground hacking forums have claimed to be using tools like ChatGPT to help them recreate malware quickly in many different programming languages.213" (p. 24)
3. "AI-enhanced social engineering is already being used by cybercriminals to conduct scams and steal login credentials, with systems that can gather intelligence on targets,214 impersonate voices of trusted contacts,215 and generate persuasive spear phishing messages.216 The risk is significant given most cyber attackers use social engineering to gain access to the victim organisation's networks.217" (p. 24)
4. "Frontier AI developments may result in systems that can act on the internet to perform their own cyberattacks autonomously.222 Behaviours such as autonomous replication and self-improving exploit generation are of particular concern, and some work has started to look at how good today’s models are at these behaviours." (p. 24)

Part of: Loss of control
