Regulates the MOD's adoption and deployment of AI to align with democratic values, safety, and responsibility. Specifies requirements for AI in Defence, including holistic risk management, legal compliance, ethical principles, reliability, bias mitigation, security, and human-AI teaming. Guides AI governance, lifecycle management, and supplier cooperation. Encourages a pragmatic approach to scope and international collaboration.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding internal military directive (a Joint Service Publication, JSP) with mandatory requirements, enforcement mechanisms, and clear accountability structures. The document uses mandatory language throughout ('must', 'shall') and establishes formal governance, approval, and compliance mechanisms, including ministerial oversight and parliamentary accountability.
The document has good coverage of approximately 12-14 subdomains, with a strong focus on AI system security (2.2), malicious actors (4.1, 4.2), governance failure (6.5), AI safety failures (7.1, 7.2, 7.3, 7.4), and multi-agent risks (7.6). Coverage is concentrated in the security, misuse-prevention, AI safety, and governance domains, with minimal coverage of the discrimination/toxicity and misinformation domains.
This document primarily governs AI use within the National Security sector (military defence operations and capabilities). It also has secondary coverage of several support sectors including Professional and Technical Services, Scientific Research and Development Services, Information (for digital systems and data processing), and Management/Administrative Services that support defence operations.
The document comprehensively covers all AI lifecycle stages, from planning through to operation and monitoring. It provides detailed requirements for each stage: planning, requirements definition, architecture, algorithm design and implementation, data collection and preparation, model development, verification and validation, integration and deployment, and ongoing monitoring. The document emphasizes through-life management and continuous assurance.
The document explicitly covers AI models, AI systems, and various AI types. It provides a general characterization of AI as 'Machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks.' The document addresses both general-purpose and task-specific applications, mentions generative AI (including Large Language Models), and discusses machine learning extensively. It does not use compute thresholds or explicitly discuss open-weight models, though it addresses third-party software and open source tools.
UK Ministry of Defence (MOD), Defence AI Unit (DAU), Defence AI Centre (DAIC)
The document is a Joint Service Publication (JSP) issued by the UK Ministry of Defence. The Defence AI Unit (DAU) is identified as responsible for setting AI policy in relation to ethics, and the Defence AI Centre (DAIC) provides technical guidance and direction.
TLB Holders, Chief Executives, Responsible AI Senior Officers (RAISOs), Defence AI Unit (DAU), Defence AI Centre (DAIC), Joint Requirements Oversight Committee/Investments Approvals Committee (JROC/IAC), Ministers, 2PUS (Second Permanent Under Secretary)
The document establishes a clear governance hierarchy with enforcement responsibilities. TLB Holders and Chief Executives are responsible for issuing direction and ensuring compliance. RAISOs ensure appropriate assurance is in place. High-risk projects must be referred to JROC/IAC or Ministers for approval.
TLB Executive Boards, Responsible AI Senior Officers (RAISOs), Defence AI Unit (DAU), 2PUS (Second Permanent Under Secretary), Independent Ethics Assurance mechanisms, Ethics Managers, Quality Assurance Managers, Safety Managers
The document establishes monitoring through annual assurance statements, ongoing risk management, and dedicated monitoring roles. TLB Executive Boards must provide annual Statements of AI Ethical Assurance. RAISOs oversee ethical development and use. Independent Ethics Assurance mechanisms are required for significant projects.
All MOD staff, TLB Holders, Chief Executives, Commanding Officers, managers, providers of Science and Technology research, capability planners, capability sponsors, programme Senior Responsible Owners, requirements managers, Delivery Agents, Defence Line of Development owners, trials units/organisations, Specialist Engineering Functions, policy makers, end users, MOD suppliers and contractors
The JSP explicitly applies to all MOD staff across all phases of the AI lifecycle and identifies specific stakeholder groups. It also applies to suppliers delivering AI products to the MOD, making them subject to its requirements.
18 subdomains (8 Good, 10 Minimal)