This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider in system and model design.
Raise staff awareness of threats and risks
System owners and senior leaders understand threats to secure AI and their mitigations. Your data scientists and developers maintain an awareness of relevant security threats and failure modes and help risk owners to make informed decisions. You provide users with guidance on the unique security risks facing AI systems (for example, as part of standard InfoSec training) and train developers in secure coding techniques and secure and responsible AI practices.
Model the threats to your system
As part of your risk management process, you apply a holistic process to assess the threats to your system, which includes understanding the potential impacts to the system, users, organisations, and wider society if an AI component is compromised or behaves unexpectedly. This process involves assessing the impact of AI-specific threats and documenting your decision making.
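One way to make this assessment concrete is to keep a structured, documented record of each AI-specific threat alongside its assessed impact, likelihood, and the decision taken. The sketch below is a minimal, hypothetical illustration of such a record; the scales, field names, and categories are assumptions for illustration and are not prescribed by the guideline.

```python
from dataclasses import dataclass

# Hypothetical threat-register entry; the 1-5 scales and the field names
# are illustrative assumptions, not part of the guideline itself.
@dataclass
class ThreatEntry:
    threat: str          # e.g. "training data poisoning"
    impact: int          # 1 (negligible) .. 5 (severe), across users, org, society
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    decision: str        # documented decision, e.g. "mitigate", "accept"

    @property
    def risk_score(self) -> int:
        # Simple impact-times-likelihood scoring, as in a standard risk matrix.
        return self.impact * self.likelihood

def prioritise(entries: list[ThreatEntry]) -> list[ThreatEntry]:
    """Order threats so the highest combined risk is addressed first."""
    return sorted(entries, key=lambda e: e.risk_score, reverse=True)
```

Recording the decision field explicitly is what makes the process auditable: it captures not just the risk assessment but the documented choice made in response.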
Design your system for security as well as functionality and performance
You are confident that the task at hand is most appropriately addressed using AI. Having determined this, you assess the appropriateness of your AI-specific design choices. You consider your threat model and associated security mitigations alongside functionality, user experience, deployment environment, performance, assurance, oversight, ethical and legal requirements, among other considerations.
Consider security benefits and trade-offs when selecting your AI model
Your choice of AI model will involve balancing a range of requirements. This includes choice of model architecture, configuration, training data, training algorithm and hyperparameters. Your decisions are informed by your threat model, and are regularly reassessed as AI security research advances and understanding of the threat evolves.
Secure development
This section contains guidelines that apply to the development stage of the AI system development lifecycle, including supply chain security, documentation, and asset and technical debt management.
Secure development > Secure your supply chain
You assess and monitor the security of your AI supply chains across a system’s life cycle, and require suppliers to adhere to the same standards your own organisation applies to other software. If suppliers cannot adhere to your organisation’s standards, you act in accordance with your existing risk management policies. Where not produced in-house, you acquire and maintain well-secured and well-documented hardware and software components (for example, models, data, software libraries, modules, middleware, frameworks, and external APIs) from verified commercial, open source, and other third-party developers to ensure robust security in your systems. You are ready to fail over to alternative solutions for mission-critical systems if security criteria are not met. You use resources like the NCSC’s Supply Chain Guidance and frameworks such as Supply Chain Levels for Software Artifacts (SLSA) for tracking attestations of the supply chain and software development life cycles.
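A basic building block of supply chain verification is refusing to use any third-party artefact whose cryptographic digest does not match a pinned, attested value. The sketch below illustrates this under assumptions: the `PINNED_DIGESTS` table and file names are hypothetical, and in practice the expected digests would come from a signed provenance document (for example, an SLSA attestation) rather than a hard-coded dictionary.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digests for third-party artefacts. In a real pipeline
# these would be taken from signed supply-chain attestations, not hard-coded.
PINNED_DIGESTS = {
    "model.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path) -> bool:
    """Refuse any artefact that is unpinned or whose digest does not match the pin."""
    expected = PINNED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

Treating an unpinned artefact as a failure (rather than a warning) is the design choice that makes the check fail closed, which aligns with acting under existing risk management policies when supplier criteria are not met.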
Secure development > Identify, track and protect your assets
You understand the value to your organisation of your AI-related assets, including models, data (including user feedback), prompts, software, documentation, logs and assessments (including information about potentially unsafe capabilities and failure modes), recognising where they represent significant investment and where access to them enables an attacker. You treat logs as sensitive data and implement controls to protect their confidentiality, integrity and availability. You know where your assets reside and have assessed and accepted any associated risks. You have processes and tools to track, authenticate, version control and secure your assets, and can restore to a known good state in the event of compromise. You have processes and controls in place to manage what data AI systems can access, and to manage content generated by AI according to its sensitivity (and the sensitivity of the inputs that went into generating it).
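Restoring to a known good state presupposes a registry that records what "known good" is for each asset. The sketch below is a minimal illustration, assuming a simple in-memory registry; the record fields (name, version, sensitivity label, digest) are hypothetical, and a real deployment would back this with versioned, access-controlled storage.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetRecord:
    """Known-good state of one AI-related asset (model, dataset, prompt set, ...)."""
    name: str            # e.g. "fine-tuned-model-v3" (illustrative)
    version: str
    sensitivity: str     # e.g. "internal", "restricted" (illustrative labels)
    digest: str          # SHA-256 of the asset in its known-good state

def known_good(data: bytes, record: AssetRecord) -> bool:
    """Check an asset's current bytes against its recorded known-good digest."""
    return hashlib.sha256(data).hexdigest() == record.digest

def audit(assets: dict[str, bytes], registry: list[AssetRecord]) -> list[str]:
    """Return names of registered assets that are missing or have drifted."""
    return [r.name for r in registry
            if r.name not in assets or not known_good(assets[r.name], r)]
```

An audit that flags drifted or missing assets gives you the trigger for restoration; the registry entries themselves are what you restore from.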
Secure development > Document your data, models and prompts
You document the creation, operation, and life cycle management of any models, datasets and meta- or system-prompts. Your documentation includes security-relevant information such as the sources of training data (including fine-tuning data and human or other operational feedback), intended scope and limitations, guardrails, cryptographic hashes or signatures, retention time, suggested review frequency and potential failure modes.
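The security-relevant fields listed above can be captured as a structured record rather than free text, so that checks such as "is this model overdue for review?" become mechanical. The sketch below is a minimal, hypothetical schema; field names are assumptions for illustration, and established formats such as model cards carry considerably more detail.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDoc:
    """Minimal illustrative documentation record; field names are assumptions."""
    name: str
    training_data_sources: list[str]   # incl. fine-tuning data and feedback sources
    intended_scope: str
    limitations: list[str]
    guardrails: list[str]
    sha256: str                        # cryptographic hash of the released weights
    retention_days: int
    review_frequency_days: int
    known_failure_modes: list[str] = field(default_factory=list)

    def overdue_for_review(self, days_since_last_review: int) -> bool:
        # A structured record lets review scheduling be checked mechanically.
        return days_since_last_review > self.review_frequency_days
```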
Secure development > Manage your technical debt
As with any software system, you identify, track and manage your ‘technical debt’ throughout an AI system’s life cycle (technical debt arises when engineering decisions fall short of best practice in order to achieve short-term results, at the expense of longer-term benefit). Like financial debt, technical debt is not inherently bad, but should be managed from the earliest stages of development. You recognise that doing so can be more challenging in an AI context than for standard software, and that your levels of technical debt are likely to be high due to rapid development cycles and a lack of well-established protocols and interfaces. You ensure your life cycle plans (including processes to decommission AI systems) assess, acknowledge and mitigate risks to future similar systems.
Secure deployment
This section contains guidelines that apply to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and responsible release.
Guidelines for secure AI development
UK National Cyber Security Centre (NCSC); US Cybersecurity and Infrastructure Security Agency (CISA); National Security Agency (NSA); Federal Bureau of Investigation (FBI); Australian Signals Directorate's Australian Cyber Security Centre (ASD ACSC) (2023)
This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
Plan and Design: Designing the AI system, defining requirements, and planning development.
Deployer: Entity that integrates and deploys the AI system for end users.
Map: Identifying and documenting AI risks, contexts, and impacts.