
Purposeful or malicious harm

Embodied AI: Emerging Risks and Opportunities for Policy Action

Perlo et al. (2025)

Sub-category: Risk Domain

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.

"EAI systems present distinct physical risks due to their embodiment in the physical world. EAI technologies have already been designed and deployed with lethal intent, such as AI-controlled drones [52, 53]. However, fully autonomous military robots, often integrated with bespoke AI architectures [54, 55], are not yet widely used in combat. While highly or fully autonomous warfare is distinctly possible in the future [56], immediate risks arise from commercially available EAI systems, including AI-controlled quadrupeds and autonomous driving assistants."(p. 4)

Supporting Evidence (1)

1. "Recent research has demonstrated that these systems inherit jailbreaking vulnerabilities from LLM-based AI models [57–60]. This could allow malicious actors to subvert safety guardrails and perform a range of harmful and irreversible physical tasks, including detonating explosives and deliberately causing human collisions [61–63]. VLAs exacerbate this risk: an attacker might craft a visual scene or textual instruction that, when interpreted through a language-action policy, yields physically dangerous instructions not anticipated by vision- or language-only defenses [64, 65]."
