Impacts of AI (Cyberattacks)
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new or enhance existing weapons (e.g., lethal autonomous weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to use weapons to cause mass harm.
Sub-categories (4)
Automated discovery and exploitation of software systems
"GPAIs can be used to aid in the automated discovery of software vulnerabilities [33]. This can empower malicious actors, making their cyberattacks more effi- cient and potentially more damaging. This type of automation allows attackers to expand the scale of their operations at a low cost, increasing the impact of their actions. New malware can be developed automatically, or the known vulnerabilities can be exploited to create more sophisticated attacks."
Source section: 4.2 Cyberattacks, weapon development or use, and mass harm

Amplification of cyberattacks
"General-purpose AI models may significantly enhance the magnitude and ef- fectiveness of cyberattacks, by amplifying existing capabilities or resources of malicious actors [3]. For example, GPAI models may be employed to: • Automatically scan open-source codebases and compiled binaries for po- tential vulnerabilities • Apply known exploits flexibly and at scale (e.g., identifying vulnerable computers based on subtle cues in response times or output formats) • Assist with different aspects of cyberattacks, including planning, recon- naissance, exploit searching, remote control, malware implementation, and data exfiltration • Combine social engineering (phishing, deepfakes, etc.) with cyberattacks at scale."
Source section: 4.2 Cyberattacks, weapon development or use, and mass harm

AI-driven spear phishing attacks
"Generative models can be misused to target individual users more efficiently by using personalized information [23]. Highly convincing automated fraudulent schemes can exploit the trust of victims by extracting sensitive data and making the deception more likely to succeed. For example, in LLMs, this misuse can be aided by jailbreaking techniques [178]."
Source section: 4.3 Fraud, scams, and targeted manipulation

Models generating code with security vulnerabilities
"Models can generate code or coding suggestions that contain security vulner- abilities. This may occur across various LLM-based model families, including more advanced models with superior coding performance, where the tendency to produce insecure code is even more pronounced [26]."
Source section: 7.3 Lack of capability or robustness

Other risks from Gipiškis2024 (144)
Direct Harm Domains (content safety harms)
The following sub-categories all fall under source section 1.2 Exposure to toxic content:
- Violence and extremism
- Hate and toxicity
- Sexual content
- Child harm
- Self-harm