Practices for running and protecting AI systems in production, including deployment, monitoring, incident response, and security controls.
Follow organization-approved IT processes and procedures to deploy the AI system in an approved manner, ensuring the following controls are implemented.
Reasoning
Organization deploys AI system following approved IT procedures and operational controls.
Enforce strict access controls
Prevent unauthorized access or tampering with the AI model. Apply role-based access controls (RBAC), or preferably attribute-based access controls (ABAC) where feasible, to limit access to authorized personnel only. Distinguish between users and administrators. Require MFA and privileged access workstations (PAWs) for administrative access [CPG 2.H].
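As a minimal sketch of the user/administrator distinction above (the role names, actions, and MFA flag are illustrative placeholders, not part of the cited guidance), an RBAC check might look like:

```python
# Hypothetical RBAC sketch: admin actions additionally require a verified MFA session.
ROLE_PERMISSIONS = {
    "user": {"query_model"},
    "admin": {"query_model", "update_model", "read_logs"},
}

def is_allowed(role: str, action: str, mfa_verified: bool = False) -> bool:
    """Allow an action only if the role grants it; non-query (administrative)
    actions also require a verified MFA session."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        return False
    if action != "query_model" and not mfa_verified:
        return False
    return True
```

An ABAC variant would replace the static role table with a policy evaluated over user, resource, and environment attributes.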
1.2.4 Security Infrastructure
Ensure user awareness and training
Educate users, administrators, and developers about security best practices, such as strong password management, phishing prevention, and secure data handling. Promote a security-aware culture to minimize the risk of human error. If possible, use a credential management system to limit, manage, and monitor credential use to minimize risks further [CPG 2.I].
2.4.4 Training & Awareness
Conduct audits and penetration testing
Engage external security experts to conduct audits and penetration testing on ready-to-deploy AI systems. This helps identify vulnerabilities and weaknesses that may have been overlooked internally. [13], [15]
2.2.3 Auditing & Compliance
Implement robust logging and monitoring
Monitor the system’s behavior, inputs, and outputs with robust monitoring and logging mechanisms to detect abnormal behavior or potential security incidents [CPG 3.A]. [16] Watch for data drift and for high-frequency or repetitive inputs, as these could be signs of model compromise or automated compromise attempts. [17] Establish alert systems to notify administrators of potential oracle-style adversarial compromise attempts, security breaches, or anomalies. Timely detection of and response to cyber incidents is critical to safeguarding AI systems. [18]
2.3.3 Monitoring & Logging
Update and patch regularly
When updating the model to a new/different version, run a full evaluation to ensure that accuracy, performance, and security tests are within acceptable limits before redeploying.
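A pre-redeployment gate over evaluation results can be sketched as follows; the metric names and thresholds are hypothetical placeholders for an organization’s own accuracy, performance, and security test suite:

```python
# Hypothetical release-gate thresholds; higher is better except latency.
THRESHOLDS = {"accuracy": 0.95, "latency_p99_ms": 250.0, "jailbreak_block_rate": 0.99}

def passes_release_gate(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (ok, failures): ok is True only if every required metric is
    present and within its acceptable limit."""
    failures = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: missing")
        elif name.startswith("latency"):
            if value > limit:
                failures.append(f"{name}: {value} > {limit}")
        elif value < limit:
            failures.append(f"{name}: {value} < {limit}")
    return (not failures, failures)
```

Wiring this into the deployment pipeline ensures a new model version cannot ship while any required evaluation is missing or failing.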
2.2.2 Testing & Evaluation
Prepare for high availability (HA) and disaster recovery (DR)
Use an immutable backup storage system, depending on the requirements of the system, to ensure that every object, especially log data, is immutable and cannot be changed [CPG 2.R]. [2]
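Immutability is best enforced at the storage layer (e.g., object-lock/WORM features of the backup system). As a complementary, purely illustrative technique, a hash-chained log makes after-the-fact tampering with log data detectable:

```python
import hashlib
import json

def append_entry(chain: list[dict], record: dict) -> list[dict]:
    """Append a log record linked to the previous entry's hash (tamper-evident)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; any modified or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

This detects tampering but does not prevent deletion of the whole chain, which is why write-once backup storage remains the primary control.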
1.2.4 Security Infrastructure
Plan secure delete capabilities
At the completion of any process in which data and models are exposed or accessible, perform autonomous, irretrievable deletion of components, such as training and validation models or cryptographic keys, leaving no retained copies or remnants. [19]
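A best-effort sketch of zeroizing key material and overwriting a file before deletion follows. Note the caveats in the comments: Python may retain copies of immutable data, and overwriting does not guarantee physical erasure on SSDs or copy-on-write filesystems, so hardware- or platform-level secure-erase mechanisms remain necessary:

```python
import os

def zeroize_key(key: bytearray) -> None:
    """Overwrite key material in place. Works only for mutable buffers;
    Python may still hold copies elsewhere, so this is best-effort."""
    for i in range(len(key)):
        key[i] = 0

def shred_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with random bytes before unlinking it.
    On SSDs and copy-on-write filesystems, overwriting does not guarantee
    physical erasure; use platform secure-erase features where available."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```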
1.2.4 Security Infrastructure
Secure the deployment environment
Organizations typically deploy AI systems within existing IT infrastructure. Before deployment, they should ensure that the IT environment applies sound security principles, such as robust governance, a well-designed architecture, and secure configurations. For example, ensure that the person responsible and accountable for AI system cybersecurity is the same person responsible and accountable for the organization’s cybersecurity in general [CPG 1.B]. The security best practices and requirements for IT environments apply to AI systems, too. The following best practices are particularly important to apply to the AI systems and the IT environments the organization deploys them in.
2.3.2 Access & Security Controls
Secure the deployment environment > Manage deployment environment governance
If an organization outside of IT is deploying or operating the AI system, work with the IT service department to identify the deployment environment and confirm it meets the organization’s IT standards.
- Understand the organization’s risk level and ensure that the AI system and its use are within the organization’s overall risk tolerance and within the risk tolerance for the specific IT environment hosting the AI system. Assess and document applicable threats, potential impacts, and risk acceptance. [3], [4]
- Identify the roles and responsibilities of each stakeholder and how they are accountable for fulfilling them; identifying these stakeholders is especially important if the organization manages its IT environment separately from its AI system.
- Identify the IT environment’s security boundaries and how the AI system fits within them. Require the primary developer of the AI system to provide a threat model for their system.
- The AI system deployment team should use the threat model as a guide to implement security best practices, assess potential threats, and plan mitigations. [5], [6]

Consider deployment environment security requirements when developing contracts for AI system products or services. Promote a collaborative culture for all parties involved, particularly the data science, infrastructure, and cybersecurity teams, so that teams can voice any risks or concerns and the organization can address them appropriately.
2.3.2 Access & Security Controls
Secure the deployment environment > Ensure a robust deployment environment architecture
Establish security protections for the boundaries between the IT environment and the AI system [CPG 2.F]:
- Identify and address blind spots in boundary protections and other security-relevant areas of the AI system identified by the threat model. For example, ensure the use of an access control system for the AI model weights and limit access to a set of privileged users with two-person control (TPC) and two-person integrity (TPI) [CPG 2.E].
- Identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Examine the list of data sources, when available, for models trained by others. Maintaining a catalog of trusted and valid data sources will help protect against potential data poisoning or backdoor attacks.
- For data acquired from third parties, consider contractual or service level agreement (SLA) stipulations as recommended by CPG 1.G and CPG 1.H.
- Apply secure by design principles and Zero Trust (ZT) frameworks to the architecture to manage risks to and from the AI system. [7], [8], [9]
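One way to maintain and enforce a catalog of trusted data sources, as described above, is to record a content digest per source and verify it before use. This sketch is illustrative, not prescribed by the guidance; the catalog shape and function names are assumptions:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Content digest used as the catalog's integrity record."""
    return hashlib.sha256(data).hexdigest()

def verify_data_source(name: str, data: bytes, catalog: dict[str, str]) -> bool:
    """Accept a training/fine-tuning data source only if it appears in the
    trusted catalog AND its content hash matches the recorded digest,
    guarding against poisoned or swapped datasets."""
    expected = catalog.get(name)
    return expected is not None and sha256_digest(data) == expected
```

In practice, the catalog would be stored under access control and the digests recorded at the time each source is vetted.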
2.3.2 Access & Security Controls
Secure the deployment environment > Harden deployment environment configurations
Apply existing security best practices to the deployment environment:
- Sandbox the environment running ML models within hardened containers or virtual machines (VMs) [CPG 2.E], monitor the network [CPG 2.T], configure firewalls with allow lists [CPG 2.F], and follow other best practices, such as those in NSA’s Top Ten Cloud Mitigation Strategies for cloud deployments.
- Review hardware vendor guidance and notifications (e.g., for GPUs, CPUs, memory) and apply software patches and updates to minimize the risk of exploitation of vulnerabilities, preferably via the Common Security Advisory Framework (CSAF). [10]
- Secure sensitive AI information (e.g., AI model weights, outputs, and logs) by encrypting the data at rest, and store encryption keys in a hardware security module (HSM) for later on-demand decryption [CPG 2.L].
- Implement strong authentication mechanisms, access controls, and secure communication protocols, such as the latest version of Transport Layer Security (TLS) to encrypt data in transit [CPG 2.K]. Ensure the use of phishing-resistant multifactor authentication (MFA) for access to information and services. [2] Monitor for and respond to fraudulent authentication attempts [CPG 2.H]. [11]
- Understand and mitigate how malicious actors exploit weak security controls by following the mitigations in Weak Security Controls and Practices Routinely Exploited for Initial Access.
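Using Python’s standard library purely as an illustration, a client-side TLS context that enforces a modern protocol floor and certificate verification for data in transit might look like:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Create a client TLS context that refuses anything below TLS 1.2 and
    verifies server certificates against the system trust store."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Prefer TLSv1_3 where both endpoints support it; 1.2 is the floor here.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

The same idea applies regardless of language: pin a minimum protocol version, require certificate validation, and never disable hostname checking in production.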
2.3.2 Access & Security Controls
Secure the deployment environment > Protect deployment networks from threats
Adopt a ZT mindset, which assumes a breach is inevitable or has already occurred. Implement detection and response capabilities, enabling quick identification and containment of compromises. [8], [9] Use well-tested, high-performing cybersecurity solutions to identify attempts to gain unauthorized access efficiently and enhance the speed and accuracy of incident assessments [CPG 2.G]. Integrate an incident detection system to help prioritize incidents [CPG 3.A]. Also integrate a means to immediately block access by users suspected of being malicious or to disconnect all inbound connections to the AI models and systems in case of a major incident when a quick response is warranted.
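A minimal, illustrative sketch of the capability described above, blocking individual suspected-malicious users plus a global switch to refuse all inbound connections during a major incident (the class and method names are placeholders, not part of the cited guidance):

```python
class AccessGate:
    """In-memory block list with a global kill switch for incident response."""

    def __init__(self):
        self.blocked: set[str] = set()
        self.lockdown = False  # when True, all inbound requests are refused

    def block(self, client_id: str) -> None:
        """Immediately deny further access to a suspected-malicious client."""
        self.blocked.add(client_id)

    def trigger_lockdown(self) -> None:
        """Sever all inbound access to the AI system during a major incident."""
        self.lockdown = True

    def allow(self, client_id: str) -> bool:
        return not self.lockdown and client_id not in self.blocked
```

In a real deployment this decision point would live at the network edge (load balancer, API gateway, or firewall) and be backed by durable, replicated state rather than process memory.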
2.3.2 Access & Security Controls
Continuously protect the AI system
Models are software and, like all other software, may have vulnerabilities, other weaknesses, or malicious code or properties.
2.3.2 Access & Security Controls
Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems
U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC-UK) (2024)
Deploying artificial intelligence (AI) systems securely requires careful setup and configuration that depends on the complexity of the AI system, the resources required (e.g., funding, technical expertise), and the infrastructure used (i.e., on premises, cloud, or hybrid). This report expands upon the ‘secure deployment’ and ‘secure operation and maintenance’ sections of the Guidelines for secure AI system development and incorporates mitigation considerations from Engaging with Artificial Intelligence (AI). It is for organizations deploying and operating AI systems designed and developed by another entity. The best practices may not be applicable to all environments, so the mitigations should be adapted to specific use cases and threat profiles.
Other (multiple stages): applies across multiple lifecycle stages
Deployer: entity that integrates and deploys the AI system for end users
Other: risk management function not captured by the standard AIRM categories