Requires endorsing States to ensure that military AI use aligns with international law. Mandates oversight, transparency, and rigorous testing for military AI systems. Urges States to minimize bias, implement safeguards, clearly define system functions, and train personnel. Promotes public commitment and international cooperation for responsible AI deployment.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a non-binding political declaration with voluntary commitments by endorsing States. The document uses predominantly voluntary language ('should') and relies on state commitment rather than legal enforcement mechanisms.
The document provides good coverage of roughly six to eight subdomains, with a strong focus on AI system security (2.2), malicious actors and weapons development (4.2), competitive dynamics (6.4), governance failure (6.5), goal misalignment (7.1), dangerous capabilities (7.2), and lack of robustness (7.3). Coverage is concentrated in security, military misuse prevention, and AI safety domains specific to military applications.
This document exclusively governs the National Security sector, specifically addressing military AI capabilities, autonomous weapons systems, and AI use in armed conflict. It does not regulate AI use in any civilian economic sectors.
The document explicitly covers all stages of the AI lifecycle for military AI capabilities, with particular emphasis on development, deployment, testing/validation, and operational monitoring. It repeatedly references measures 'throughout the life cycle of military AI capabilities' and 'across their entire life-cycles.'
The document explicitly mentions AI capabilities, autonomous functions and systems, and weapon systems incorporating AI. It provides definitions of artificial intelligence and autonomy. It does not specifically mention frontier AI, general purpose AI, foundation models, generative AI, predictive AI, open-weight models, or compute thresholds.
None specified (multinational initiative among endorsing States)
No specific proposing entity is named in the document; it refers to 'endorsing States' collectively without identifying the organization that drafted the declaration.
Individual endorsing States (self-enforcement)
The declaration relies on self-enforcement by each endorsing State. No external enforcement body is specified; instead, States commit to implementing measures within their own military organizations.
Individual endorsing States (self-monitoring) and peer States through public disclosure
Monitoring occurs primarily through self-assessment by States and through transparency via public disclosure of commitments. The declaration establishes continued discussions among endorsing States as an informal monitoring mechanism.
States and their military organizations that develop, deploy, or use military AI capabilities
The declaration explicitly targets States and their military organizations that develop or use military AI capabilities, including autonomous weapons systems. States act as both governance actors and developers/deployers of military AI.
12 subdomains (6 Good, 6 Minimal)