Structured analysis to identify, characterize, and prioritize potential harms and risks.
Documents intended purposes, users, impacts, and assumptions to identify and characterize potential risks pre-deployment.
When identifying intended purposes, consider factors such as internal vs. external use, narrow vs. broad application scope, whether the model is fine-tuned, and the variety of data sources (e.g., grounding, retrieval-augmented generation).
2.2.1 Risk Assessment: Determine and document the expected and acceptable GAI system context of use in collaboration with socio-cultural and other domain experts, by assessing: assumptions and limitations; direct value to the organization; intended operational environment and observed usage patterns; potential positive and negative impacts to individuals, public safety, groups, communities, organizations, democratic institutions, and the physical environment; social norms and expectations.
2.2.1 Risk Assessment: Document risk measurement plans to address identified risks. Plans may include, as applicable: individual and group cognitive biases (e.g., confirmation bias, funding bias, groupthink) for AI Actors involved in the design, implementation, and use of GAI systems; known past GAI system incidents and failure modes; in-context use and foreseeable misuse, abuse, and off-label use; overreliance on quantitative metrics and methodologies without sufficient awareness of their limitations in the context(s) of use; standard measurement and structured human feedback approaches; anticipated human-AI configurations.
2.2.4 Assurance Documentation: Identify and document foreseeable illegal uses or applications of the GAI system that surpass organizational risk tolerances.
2.2.1 Risk Assessment: Legal and regulatory requirements involving AI are understood, managed, and documented.
2.1.3 Policies & Procedures: Legal and regulatory requirements involving AI are understood, managed, and documented. > Align GAI development and use with applicable laws and regulations, including those related to data privacy, copyright, and intellectual property law.
2.1.3 Policies & Procedures: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
2.1.3 Policies & Procedures: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. > Establish transparency policies and processes for documenting the origin and history of training data and generated data for GAI applications to advance digital content transparency, while balancing the proprietary nature of training approaches.
2.1.3 Policies & Procedures: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. > Establish policies to evaluate risk-relevant capabilities of GAI and robustness of safety measures, both prior to deployment and on an ongoing basis, through internal and external evaluations.
2.1.3 Policies & Procedures: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization’s risk tolerance.
Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1)
US National Institute of Standards and Technology (NIST) (2024)
This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden’s Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The AI RMF was released in January 2023 and is intended for voluntary use, to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Plan and Design
Designing the AI system, defining requirements, and planning development
Developer
Entity that creates, trains, or modifies the AI system
Map
Identifying and documenting AI risks, contexts, and impacts