Definition of roles, teams, and responsibility assignments for AI governance.
Reasoning
Documents role definitions, responsibility assignments, and communication lines for AI risk management across the organization.
- Establish organizational roles, policies, and procedures for communicating GAI incidents and performance to AI Actors and downstream stakeholders (including those potentially impacted), via community or official resources (e.g., an AI incident database, AVID, CVE, NVD, or the OECD AI Incident Monitor). [2.1.3 Policies & Procedures]
- Establish procedures to engage teams for GAI system incident response with diverse composition and responsibilities based on the particular incident type. [2.3.4 Incident Response]
- Establish processes to verify the AI Actors conducting GAI incident response tasks demonstrate and maintain the appropriate skills and training. [2.4.4 Training & Awareness]
- When systems may raise national security risks, involve national security professionals in mapping, measuring, and managing those risks. [2.1.2 Roles & Accountability]
- Create mechanisms to provide protections for whistleblowers who report, based on reasonable belief, when the organization violates relevant laws or poses a specific and empirically well-substantiated negative risk to public safety (or has already caused harm). [2.1.3 Policies & Procedures]
- Legal and regulatory requirements involving AI are understood, managed, and documented. [2.1.3 Policies & Procedures]
- Legal and regulatory requirements involving AI are understood, managed, and documented. > Align GAI development and use with applicable laws and regulations, including those related to data privacy, copyright and intellectual property law. [2.1.3 Policies & Procedures]
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. [2.1.3 Policies & Procedures]
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. > Establish transparency policies and processes for documenting the origin and history of training data and generated data for GAI applications to advance digital content transparency, while balancing the proprietary nature of training approaches. [2.1.3 Policies & Procedures]
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. > Establish policies to evaluate risk-relevant capabilities of GAI and robustness of safety measures, both prior to deployment and on an ongoing basis, through internal and external evaluations. [2.1.3 Policies & Procedures]
- Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance. [2.1.3 Policies & Procedures]

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1)
US National Institute of Standards and Technology (NIST) (2024)
This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The AI RMF, released in January 2023, is intended for voluntary use and to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
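The incident-communication action quoted above presumes a structured, shareable incident record that can be routed to stakeholders or public trackers such as AVID or the OECD AI Incident Monitor. A minimal sketch of such a record, with hypothetical field names (NIST AI 600-1 does not prescribe a schema, and none of these resources is guaranteed to accept this format):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIIncidentReport:
    """Hypothetical record for communicating a GAI incident to
    downstream stakeholders or incident databases. All field names
    are illustrative assumptions, not a standardized schema."""
    incident_id: str
    system_name: str
    incident_type: str                 # e.g. "harmful output", "data leakage"
    severity: str                      # e.g. "low" / "medium" / "high"
    description: str
    occurred_on: str                   # ISO date string
    affected_stakeholders: list[str] = field(default_factory=list)
    reported_to: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize for submission to an internal or community resource.
        return json.dumps(asdict(self), indent=2)

report = AIIncidentReport(
    incident_id="INC-2024-001",
    system_name="support-chat-assistant",
    incident_type="harmful output",
    severity="medium",
    description="Model produced unsafe instructions despite guardrails.",
    occurred_on="2024-05-17",
    affected_stakeholders=["end users"],
    reported_to=["internal incident database"],
)
print(report.to_json())
```

Keeping the record machine-readable makes it easier to fan the same report out to the internal database and any external channels the organization's policy names.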
Other (outside lifecycle): Outside the standard AI system lifecycle
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management
Primary: 6.5 Governance failure