Lethal Autonomous Weapons (LAW)
Risk Domain
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., Lethal Autonomous Weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to deploy weapons to cause mass harm.
"What is debated as an ethical issue is the use of LAW — AI-driven weapons that fully autonomously take actions that intentionally kill humans."(p. 9)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Other risks from Hogenhout (2021) (12)
Risk | Subdomain | Entity | Intent | Timing
Incompetence | 7.3 Lack of capability or robustness | AI system | Unintentional | Post-deployment
Loss of privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | Human | Intentional | Post-deployment
Discrimination | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Post-deployment
Bias | 1.1 Unfair discrimination and misrepresentation | AI system | Unintentional | Pre-deployment
Erosion of Society | 3.2 Pollution of information ecosystem and loss of consensus reality | AI system | Unintentional | Post-deployment
Lack of transparency | 7.4 Lack of transparency or interpretability | AI system | Unintentional | Other