Internal policies, content safety guidelines, and ethical design principles governing system creation.
Your choice of AI model will involve balancing a range of requirements. These include the choice of model architecture, configuration, training data, training algorithm and hyperparameters. Your decisions are informed by your threat model, and are regularly reassessed as AI security research advances and understanding of the threat evolves.
When choosing an AI model, your considerations will likely include, but are not limited to:

- the complexity of the model you are using, that is, the chosen architecture and number of parameters; your model’s chosen architecture and number of parameters will, among other factors, affect how much training data it requires and how robust it is to changes in input data when in use
- the appropriateness of the model for your use case and/or the feasibility of adapting it to your specific need (for example, by fine-tuning)
- the ability to align, interpret and explain your model’s outputs (for example, for debugging, audit or regulatory compliance); there may be benefits to using simpler, more transparent models over large and complex ones which are more difficult to interpret
- characteristics of training dataset(s), including size, integrity, quality, sensitivity, age, relevance and diversity
- the value of using model hardening (such as adversarial training, illustrated in the sketch after this list), regularisation and/or privacy-enhancing techniques
- the provenance and supply chains of components, including the model or foundation model, training data and associated tools

For more information about how many of these factors impact security outcomes, refer to the NCSC’s ‘Principles for the Security of Machine Learning’, in particular Design for security (model architecture).
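To make the model-hardening consideration above concrete, the sketch below shows one common form of adversarial training, using the fast gradient sign method (FGSM). It is a minimal illustration assuming PyTorch, a generic classifier and an illustrative perturbation size of 0.03; none of these choices are prescribed by this guidance.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: step inputs in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move each input a small step (epsilon) along the sign of its loss gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and adversarially perturbed examples."""
    model.train()
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on a mix of clean and perturbed examples, as in this sketch, is one simple way to trade a small amount of clean-data accuracy for robustness to input perturbations.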
Reasoning
Establishes design principles and guidelines for model selection decisions informed by security and threat model considerations.
Secure design
This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
2.4.2 Design Standards
Secure design > Raise staff awareness of threats and risks
System owners and senior leaders understand threats to secure AI and their mitigations. Your data scientists and developers maintain an awareness of relevant security threats and failure modes and help risk owners to make informed decisions. You provide users with guidance on the unique security risks facing AI systems (for example, as part of standard InfoSec training) and train developers in secure coding techniques and secure and responsible AI practices.
2.4.4 Training & Awareness
Secure design > Model the threats to your system
As part of your risk management process, you apply a holistic process to assess the threats to your system, which includes understanding the potential impacts to the system, users, organisations, and wider society if an AI component is compromised or behaves unexpectedly. This process involves assessing the impact of AI-specific threats and documenting your decision making.
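As an illustration of documenting that decision making, the following is a minimal sketch of a threat record, assuming a simple in-house Python format; the fields and the example poisoning threat are hypothetical, not a structure mandated by the guidelines.

```python
from dataclasses import dataclass, field

@dataclass
class AIThreatRecord:
    """Illustrative record for documenting one AI-specific threat and the resulting decision."""
    threat: str                      # e.g. data poisoning, model inversion, prompt injection
    affected_component: str          # model, training pipeline, API, etc.
    impact_on_system: str
    impact_on_users_and_society: str
    likelihood: str                  # e.g. low / medium / high
    mitigations: list[str] = field(default_factory=list)
    decision: str = ""               # accepted, mitigated, transferred, avoided

# Example entry, purely for illustration:
record = AIThreatRecord(
    threat="Training data poisoning via third-party dataset",
    affected_component="Training pipeline",
    impact_on_system="Degraded or attacker-influenced model behaviour",
    impact_on_users_and_society="Incorrect or harmful outputs served to end users",
    likelihood="medium",
    mitigations=["Dataset provenance checks", "Outlier filtering", "Held-out evaluation"],
    decision="Mitigated; residual risk accepted by risk owner",
)
```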
2.2.1 Risk Assessment
Secure design > Design your system for security as well as functionality and performance
You are confident that the task at hand is most appropriately addressed using AI. Having determined this, you assess the appropriateness of your AI-specific design choices. You consider your threat model and associated security mitigations alongside functionality, user experience, deployment environment, performance, assurance, oversight, ethical and legal requirements, among other considerations.
2.4.2 Design Standards
Secure development
This section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.
2.4.3 Development Workflows
Secure development > Secure your supply chain
You assess and monitor the security of your AI supply chains across a system’s life cycle, and require suppliers to adhere to the same standards your own organisation applies to other software. If suppliers cannot adhere to your organisation’s standards, you act in accordance with your existing risk management policies. Where not produced in-house, you acquire and maintain well-secured and well-documented hardware and software components (for example, models, data, software libraries, modules, middleware, frameworks, and external APIs) from verified commercial, open source, and other third-party developers to ensure robust security in your systems. You are ready to fail over to alternative solutions for mission-critical systems if security criteria are not met. You use resources like the NCSC’s Supply Chain Guidance and frameworks such as Supply Chain Levels for Software Artifacts (SLSA) for tracking attestations of the supply chain and software development life cycles.
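As one small example of verifying a supplied component, the sketch below checks a downloaded model artifact against a pinned SHA-256 digest before it is used. The file path and digest are placeholders; in practice the expected value would come from the supplier’s signed release metadata or an SLSA provenance attestation.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file without reading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: the expected digest would normally be published by the supplier.
MODEL_PATH = Path("models/third_party_model.bin")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

actual = sha256_of(MODEL_PATH)
if actual != EXPECTED_SHA256:
    raise RuntimeError(f"Model artifact failed integrity check: {actual}")
```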
2.3.2 Access & Security Controls
Guidelines for secure AI development
UK National Cyber Security Centre (NCSC); US Cybersecurity and Infrastructure Security Agency (CISA); National Security Agency (NSA); Federal Bureau of Investigation (FBI); Australian Signals Directorate's Australian Cyber Security Centre (ASD ACSC) (2023)
This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
Plan and Design
Designing the AI system, defining requirements, and planning development
Deployer
Entity that integrates and deploys the AI system for end users
Map
Identifying and documenting AI risks, contexts, and impacts