
Lethal Autonomous Weapons Systems (LAWS)

Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, Applications, Challenges and Future Research Directions

Habbal et al. (2024)

Sub-category: Risk Domain

Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.

LAWS are a distinctive category of weapon systems that employ sensor arrays and computer algorithms to detect and attack a target without direct human intervention in the system’s operation (p. 3).

Supporting Evidence (1)

1. Humans might lose the ability to foresee which individuals or entities could become the focus of an assault, or even to elucidate the rationale behind a specific target selection made by a LAWS (p. 3).

Other risks from Habbal et al. (2024) (6)