Lethal Autonomous Weapons Systems (LAWS)
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosives), or use weapons to cause mass harm.
LAWS are a distinctive category of weapon systems that employ sensor arrays and computer algorithms to detect and attack a target without direct human intervention in the system's operation (p. 3).
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (1)
1. Humans might lose the ability to foresee which individuals or entities could become the focus of an assault, or even elucidate the rationale behind a specific target selection made by a LAWS (p. 3).
Other risks from Habbal et al. (2024) (6)
Bias and Discrimination
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Privacy Invasion
2.1 Compromise of privacy by leaking or correctly inferring sensitive information (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Society Manipulation
4.1 Disinformation, surveillance, and influence at scale (Entity: AI system; Intent: Intentional; Timing: Post-deployment)

Deepfake Technology
4.1 Disinformation, surveillance, and influence at scale (Entity: Human; Intent: Intentional; Timing: Post-deployment)

Malicious Use of AI
4.3 Fraud, scams, and targeted manipulation (Entity: Human; Intent: Intentional; Timing: Post-deployment)

Insufficient Security Measures
2.2 AI system security vulnerabilities and attacks (Entity: Human; Intent: Intentional; Timing: Post-deployment)