Physical Harm and Injury Risks
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., lethal autonomous weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to use weapons to cause mass harm.
"The integration of general-purpose AI models into embodied systems creates direct physical threats through malicious exploitation of autonomous decision-making capabilities in real-world environments. The risk lies in embodied models' capacity for autonomous action and real-world interaction, and when these capabilities are maliciously exploited they may trigger a series of serious consequences." (p. 6)
Supporting Evidence (1)
"For example, algorithms being hijacked leading to autonomous driving systems causing major traffic accidents, or compromised industrial robots triggering serious production safety incidents." (p. 6)
Other risks from SAIL & Concordia AI (2025) (36)
Misuse Risks: 4.0 Malicious Actors & Misuse
Loss of Control Risks: 5.2 Loss of human agency and autonomy
Accident Risks: 7.3 Lack of capability or robustness
Model Capabilities: 7.2 AI possessing dangerous capabilities
Cyber Offense Risks: 4.2 Cyberattacks, weapon development or use, and mass harm
Biological and Chemical Risks: 4.2 Cyberattacks, weapon development or use, and mass harm