Emphasizes AI security training for all staff and tailored secure coding training for developers. Requires systems to be designed and audited for resilience against AI-specific attacks. Mandates threat evaluation and risk management for AI systems and ensures human responsibility and oversight. Prioritizes the protection and monitoring of AI assets and data integrity. Requires developers to document AI system data and model details, test security rigorously, and keep systems updated. Demands secure end-of-life data disposal processes.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a voluntary Code of Practice: its language is predominantly mandatory ('shall'), but it establishes no formal enforcement mechanisms, penalties, or sanctions, relying instead on voluntary adherence and industry best practice.
The document has strong coverage of approximately 8-10 subdomains, with primary focus on AI system security (2.2), malicious actors and cyberattacks (4.2), AI safety failures including lack of robustness (7.3), lack of transparency (7.4), and multi-agent risks (7.6). It also addresses governance failure (6.5) through security governance frameworks, and competitive dynamics (6.4) implicitly through secure development practices.
This is a cross-sectoral Code of Practice that applies broadly to all organizations developing, deploying, or operating AI systems regardless of industry. The document does not specify particular economic sectors but provides universal security guidance applicable across all sectors where AI is used.
The document comprehensively covers all stages of the AI lifecycle from planning and design through end-of-life disposal. It emphasizes secure design (Principles 1-4), secure development (Principles 5-9), secure deployment (Principle 10), secure maintenance (Principles 11-12), and secure end-of-life (Principle 13).
The document extensively covers AI systems and AI models throughout, but it does not explicitly define or mention frontier AI, general-purpose AI, task-specific AI, foundation models, generative AI, predictive AI, open-weight models, or specific compute thresholds. Its focus is on broadly defined AI systems and their security considerations.
Government of the United Kingdom
The document is explicitly authored by the Government of the United Kingdom as stated in the document information header.
No enforcement body, agency, or authority is specified in the document. As a Code of Practice, it appears to rely on voluntary compliance without formal enforcement mechanisms.
No specific monitoring body or oversight agency is identified in the document. While it requires organizations to monitor their own systems, no external monitoring authority is designated.
Developers; System Operators; Data Custodians; End-users
The document explicitly targets multiple actor types throughout its principles: Developers build AI systems, System Operators deploy and operate them, Data Custodians manage the data they rely on, and End-users are those who use the systems. Each principle indicates that it applies primarily to Developers and System Operators.
10 subdomains (5 Good, 5 Minimal)