Practices for running and protecting AI systems in production, including deployment, monitoring, incident response, and security controls.
This section contains guidelines that apply to the secure operation and maintenance stage of the AI system development life cycle. It provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.
Reasoning: Organisational guidelines for runtime logging, monitoring and maintenance of deployed AI systems.
Monitor your system’s behaviour
You measure the outputs and performance of your model and system so that you can observe both sudden and gradual changes in behaviour that affect security, and can identify and account for potential intrusions and compromises as well as natural data drift.
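As an illustration, behaviour monitoring of this kind can be sketched as a rolling comparison against a recorded baseline. This is a minimal sketch, not part of the guidelines: it assumes outputs are scalar confidence scores, and the window size and alert threshold are illustrative choices a real deployment would tune.

```python
from collections import deque
from statistics import fmean, pstdev

class DriftMonitor:
    """Track a rolling window of model output scores and flag sudden
    or gradual shifts relative to a recorded baseline.

    Hypothetical helper: the z-score heuristic, window size and
    threshold are illustrative assumptions."""

    def __init__(self, baseline_scores, window=100, z_threshold=3.0):
        self.base_mean = fmean(baseline_scores)
        self.base_std = pstdev(baseline_scores) or 1e-9  # avoid div-by-zero
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, score):
        """Record one output score; return True once the rolling mean
        has drifted beyond the alert threshold."""
        self.window.append(score)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data to compare yet
        z = abs(fmean(self.window) - self.base_mean) / self.base_std
        return z > self.z_threshold
```

A gradual shift raises the rolling mean's z-score over successive observations, so the same check catches slow data drift as well as abrupt changes.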
2.3.3 Monitoring & Logging
Monitor your system’s inputs
In line with privacy and data protection requirements, you monitor and log inputs to your system (such as inference requests, queries or prompts) to support compliance obligations and to enable audit, investigation and remediation in the event of compromise or misuse. This could include explicit detection of out-of-distribution and/or adversarial inputs, including those that aim to exploit data preparation steps (such as cropping and resizing for images).
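One way to reconcile input logging with data protection is to log a structured record that retains only a hash of the raw prompt. The sketch below assumes this approach; the function name, the length-based out-of-distribution heuristic and the `EXPECTED_LEN` bounds are all hypothetical, and a real deployment would derive such bounds from its training distribution and data-preparation pipeline.

```python
import hashlib
import time

# Illustrative bounds on expected input length; an assumption, not a
# value from the guidelines.
EXPECTED_LEN = (1, 2048)

def log_inference_request(prompt: str, user_id: str) -> dict:
    """Build a privacy-conscious log record for one inference request.
    The raw prompt is stored only as a SHA-256 hash, so the log itself
    does not retain the input's content while still supporting audit
    and investigation (e.g. matching a reported prompt to a request)."""
    return {
        "ts": time.time(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_len": len(prompt),
        # Crude out-of-distribution signal: inputs far outside the
        # lengths seen during training often merit closer inspection.
        "ood_flag": not (EXPECTED_LEN[0] <= len(prompt) <= EXPECTED_LEN[1]),
    }
```

In practice such records would be serialised (for example with `json.dumps`) to an append-only store so that compromise of the serving path cannot silently rewrite the audit trail.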
1.2.3 Monitoring & Detection
Follow a secure by design approach to updates
You include automated updates by default in every product and use secure, modular update procedures to distribute them. Your update processes (including testing and evaluation regimes) reflect the fact that changes to data, models or prompts can lead to changes in system behaviour (for example, you treat major updates like new versions). You support users to evaluate and respond to model changes (for example by providing preview access and versioned APIs).
2.3.1 Deployment Management
Collect and share lessons learned
You participate in information-sharing communities, collaborating across the global ecosystem of industry, academia and governments to share best practice as appropriate. You maintain open lines of communication for feedback regarding system security, both within and beyond your organisation, including authorising security researchers to test your systems and report vulnerabilities. When needed, you escalate issues to the wider community, for example by publishing bulletins in response to vulnerability disclosures, including detailed and complete common vulnerability enumeration. You act quickly and appropriately to mitigate and remediate issues.
3.3.1 Industry Coordination
Secure design
This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
2.4.2 Design Standards
Secure design > Raise staff awareness of threats and risks
System owners and senior leaders understand threats to secure AI and their mitigations. Your data scientists and developers maintain an awareness of relevant security threats and failure modes and help risk owners to make informed decisions. You provide users with guidance on the unique security risks facing AI systems (for example, as part of standard InfoSec training) and train developers in secure coding techniques and secure and responsible AI practices.
2.4.4 Training & Awareness
Secure design > Model the threats to your system
As part of your risk management process, you apply a holistic process to assess the threats to your system, which includes understanding the potential impacts to the system, users, organisations, and wider society if an AI component is compromised or behaves unexpectedly. This process involves assessing the impact of AI-specific threats and documenting your decision making.
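The documentation requirement above can be made concrete as a structured threat register. The record shape below is a sketch under stated assumptions: the field names and the example entry are illustrative, not a standard schema from the guidelines.

```python
from dataclasses import dataclass

@dataclass
class ThreatRecord:
    """One entry in an AI threat model, capturing the assessed impact
    and the decision taken. Hypothetical schema for illustration."""
    threat: str            # e.g. "prompt injection", "model extraction"
    affected: list[str]    # system, users, organisation, wider society
    impact: str            # qualitative impact if the threat is realised
    mitigation: str        # chosen control, or rationale for acceptance
    decision_owner: str    # who signed off, preserving the audit trail

# Example register with one AI-specific threat (illustrative content).
register = [
    ThreatRecord(
        threat="training-data poisoning",
        affected=["system", "users"],
        impact="degraded or attacker-steered model behaviour",
        mitigation="data provenance checks and held-out evaluation sets",
        decision_owner="system risk owner",
    ),
]
```

Recording the decision owner alongside each mitigation is what turns the register into documented decision making, rather than a list of worries.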
2.2.1 Risk Assessment
Secure design > Design your system for security as well as functionality and performance
You are confident that the task at hand is most appropriately addressed using AI. Having determined this, you assess the appropriateness of your AI-specific design choices. You consider your threat model and associated security mitigations alongside functionality, user experience, deployment environment, performance, assurance, oversight, ethical and legal requirements, among other considerations.
2.4.2 Design Standards
Secure design > Consider security benefits and trade-offs when selecting your AI model
Your choice of AI model will involve balancing a range of requirements. This includes choice of model architecture, configuration, training data, training algorithm and hyperparameters. Your decisions are informed by your threat model, and are regularly reassessed as AI security research advances and understanding of the threat evolves.
2.4.2 Design Standards
Secure development
This section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.
2.4.3 Development Workflows
Guidelines for secure AI development
UK National Cyber Security Centre (NCSC); US Cybersecurity and Infrastructure Security Agency (CISA); National Security Agency (NSA); Federal Bureau of Investigation (FBI); Australian Signals Directorate's Australian Cyber Security Centre (ASD ACSC) (2023)
This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Deployer
Entity that integrates and deploys the AI system for end users
Manage
Prioritising, responding to, and mitigating AI risks