Purposeful or malicious harm
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new or enhance existing weapons (e.g., lethal autonomous weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or to deploy weapons to cause mass harm.
"EAI systems present distinct physical risks due to their embodiment in the physical world. EAI technologies have already been designed and deployed with lethal intent, such as AI-controlled drones [52, 53]. However, fully autonomous military robots, often integrated with bespoke AI architectures [54, 55], are not yet widely used in combat. While highly or fully autonomous warfare is distinctly possible in the future [56], immediate risks arise from commercially available EAI systems, including AI-controlled quadrupeds and autonomous driving assistants."(p. 4)
Supporting Evidence (1)
"Recent research has demonstrated that these systems inherit jailbreaking vulnerabilities from LLM-based AI models [57–60]. This could allow malicious actors to subvert safety guardrails and perform a range of harmful and irreversible physical tasks, including detonating explosives and deliberately causing human collisions [61–63]. VLAs exacerbate this risk: an attacker might craft a visual scene or textual instruction that, when interpreted through a language-action policy, yields physically dangerous instructions not anticipated by vision- or language-only defenses [64, 65]."
Other risks from Perlo et al. (2025) (12)
Economic Risks: 6.0 Socioeconomic & Environmental
Accidental harm: 7.3 Lack of capability or robustness
Privacy Violations: 2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Misinformation: 3.1 False or misleading information
Labour Displacement: 6.2 Increased inequality and decline in employment quality
Socioeconomic Inequality: 6.2 Increased inequality and decline in employment quality