Response plans, escalation procedures, and incident management for operational emergencies.
Reasoning
Establishes contingency processes to handle failures or incidents in high-risk third-party systems post-deployment.
2.2.1 Risk Assessment: Document GAI risks associated with the system value chain to identify over-reliance on third-party data and to identify fallbacks.
2.3.3 Monitoring & Logging: Document incidents involving third-party GAI data and systems, including open data and open-source software.
2.3.4 Incident Response: Establish incident response plans for third-party GAI technologies:
- Align incident response plans with impacts enumerated in MAP 5.1.
- Communicate third-party GAI incident response plans to all relevant AI Actors.
- Define ownership of GAI incident response functions.
- Rehearse third-party GAI incident response plans at a regular cadence.
- Improve incident response plans based on retrospective learning.
- Review incident response plans for alignment with relevant breach reporting, data protection, data privacy, or other laws.
2.1.3 Policies & Procedures: Establish policies and procedures for continuous monitoring of third-party GAI systems in deployment.
2.1.3 Policies & Procedures: Establish policies and procedures that address GAI data redundancy, including model weights and other system artifacts.
2.1.3 Policies & Procedures: Establish policies and procedures to test and manage risks related to rollover and fallback technologies for GAI systems, acknowledging that rollover and fallback may include manual processing.
2.1.3 Policies & Procedures: Review vendor contracts; avoid arbitrary or capricious termination of critical GAI technologies or vendor services, and non-standard terms that may amplify or defer liability in unexpected ways and/or contribute to unauthorized data collection by vendors or third parties (e.g., secondary data use). Consider:
- Clear assignment of liability and responsibility for incidents.
- GAI system changes over time (e.g., fine-tuning, drift, decay).
Request:
- Notification and disclosure for serious incidents arising from third-party data and systems.
- Service Level Agreements (SLAs) in vendor contracts that address incident response, response times, and availability of critical support.
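The rollover/fallback practice above notes that fallback may include manual processing. A minimal sketch of what that degradation chain could look like in code, assuming hypothetical service callables and a hypothetical `ManualReviewQueue` (none of these names come from any vendor API or from NIST AI 600-1):

```python
"""Sketch: rollover/fallback for a third-party GAI call.

All names here (generate_with_fallback, ManualReviewQueue, the service
callables) are illustrative assumptions, not a standard interface.
"""
from dataclasses import dataclass, field


@dataclass
class ManualReviewQueue:
    """Last-resort fallback: route the request to human processing."""
    items: list = field(default_factory=list)

    def submit(self, prompt: str) -> str:
        self.items.append(prompt)
        return "queued-for-manual-processing"


def generate_with_fallback(prompt, primary, fallback, manual_queue):
    """Try the primary GAI service, then a fallback service, then manual processing."""
    for service in (primary, fallback):
        try:
            return service(prompt)
        except Exception:
            continue  # a real deployment would also log the failure and escalate
    return manual_queue.submit(prompt)
```

Testing this path regularly, as the practice recommends, is as important as having it: a rehearsal can be as simple as injecting a failing primary service and confirming requests land in the manual queue.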
2.1.3 Policies & Procedures: Legal and regulatory requirements involving AI are understood, managed, and documented.
- Align GAI development and use with applicable laws and regulations, including those related to data privacy, copyright, and intellectual property law.
2.1.3 Policies & Procedures: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
- Establish transparency policies and processes for documenting the origin and history of training data and generated data for GAI applications to advance digital content transparency, while balancing the proprietary nature of training approaches.
- Establish policies to evaluate risk-relevant capabilities of GAI and robustness of safety measures, both prior to deployment and on an ongoing basis, through internal and external evaluations.
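The transparency practice above calls for documenting the origin and history of training data. One minimal sketch of such a provenance record, assuming illustrative field names (this is not a standard schema from NIST or elsewhere):

```python
"""Sketch: a training-data provenance record. Field names are assumptions."""
from dataclasses import dataclass, asdict
from datetime import date


@dataclass(frozen=True)
class DataProvenanceRecord:
    dataset_name: str
    origin: str              # e.g. vendor, open-data portal, internal collection
    license: str
    collected_on: date
    transformations: tuple   # ordered history of processing steps applied


# Example entry for a hypothetical dataset:
record = DataProvenanceRecord(
    dataset_name="example-corpus",
    origin="open-data-portal",
    license="CC-BY-4.0",
    collected_on=date(2024, 1, 15),
    transformations=("deduplicated", "PII-scrubbed"),
)
```

Keeping the record immutable (`frozen=True`) and serializable (`asdict`) makes it straightforward to archive alongside model artifacts without risking silent edits to the recorded history.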
2.1.3 Policies & Procedures: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1)
US National Institute of Standards and Technology (NIST) (2024)
This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The AI RMF was released in January 2023; it is intended for voluntary use and aims to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment.
Deployer: Entity that integrates and deploys the AI system for end users.
Govern: Policies, processes, and accountability structures for AI risk management.