
Cyber offence

International Scientific Report on the Safety of Advanced AI

Bengio et al. (2024)

Sub-category
Risk Domain

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.

"General- purpose AI systems could uplift the cyber expertise of individuals, making it easier for malicious users to conduct effective cyber- attacks, as well as providing a tool that can be used in cyber defence. General- purpose AI systems can be used to automate and scale some types of cyber operations, such as social engineering attacks."(p. 44)

Supporting Evidence (2)

1.
"General- purpose AI systems can exacerbate existing cybersecurity risks in several ways. Firstly, they may lower the barrier to entry of more sophisticated cyber attacks, so the number of people capable of such attacks might increase. Secondly, general- purpose AI systems could be used to scale offensive cyber operations, through increasing levels of automation and efficiency."(p. 44)
2.
"General- purpose AI systems reduce the cost, technical know- how, and expertise needed to conduct cyber- attacks. Offensive cyber operations include designing and spreading malicious software as well as discovering and exploiting vulnerabilities in critical systems. They can lead to significant security breaches, for example in critical national infrastructure (CNI), and pose a threat to public safety and security. Given the labour- intensive nature of these operations, advanced general- purpose AI that automates certain aspects of the process, reducing the number of experts needed and lowering the required level of expertise, could be useful for attackers."(p. 44)

Part of Malicious Use Risks
