Runtime monitoring, observability, performance tracking, and anomaly detection in production.
Organizational mechanisms sustain deployed system value through runtime monitoring and maintenance practices.
- Compare GAI system outputs against pre-defined organization risk tolerance, guidelines, and principles, and review and test AI-generated content against these guidelines. (2.2.2 Testing & Evaluation)
- Document training data sources to trace the origin and provenance of AI-generated content. (1.2.5 Provenance & Watermarking)
- Evaluate feedback loops between GAI system content provenance and human reviewers, and update where needed. Implement real-time monitoring systems to affirm that content provenance protocols remain effective. (2.3.3 Monitoring & Logging)
- Evaluate GAI content and data for representational biases and employ techniques such as re-sampling, re-ranking, or adversarial training to mitigate biases in the generated content. (1.1.2 Learning Objectives)
- Engage in due diligence to analyze GAI output for harmful content, potential misinformation, and CBRN-related or NCII content. (2.2.2 Testing & Evaluation)
- Use feedback from internal and external AI Actors, users, individuals, and communities to assess the impact of AI-generated content. (2.2.1 Risk Assessment)
- Use real-time auditing tools where they can be demonstrated to aid in the tracking and validation of the lineage and authenticity of AI-generated data. (1.2.5 Provenance & Watermarking)
- Use structured feedback mechanisms to solicit and capture user input about AI-generated content to detect subtle shifts in quality or alignment with community and societal values. (2.3.3 Monitoring & Logging)
- Consider opportunities to responsibly use synthetic data and other privacy-enhancing techniques in GAI development, where appropriate and applicable, to match the statistical properties of real-world data without disclosing personally identifiable information or contributing to homogenization. (1.1.1 Training Data)
- Legal and regulatory requirements involving AI are understood, managed, and documented. (2.1.3 Policies & Procedures)
- Legal and regulatory requirements involving AI are understood, managed, and documented. > Align GAI development and use with applicable laws and regulations, including those related to data privacy, copyright, and intellectual property law. (2.1.3 Policies & Procedures)
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. (2.1.3 Policies & Procedures)
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. > Establish transparency policies and processes for documenting the origin and history of training data and generated data for GAI applications to advance digital content transparency, while balancing the proprietary nature of training approaches. (2.1.3 Policies & Procedures)
- The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices. > Establish policies to evaluate risk-relevant capabilities of GAI and robustness of safety measures, both prior to deployment and on an ongoing basis, through internal and external evaluations. (2.1.3 Policies & Procedures)
- Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance. (2.1.3 Policies & Procedures)

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1)
US National Institute of Standards and Technology (NIST) (2024)
This document is a cross-sectoral profile of, and companion resource for, the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden's Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. The AI RMF was released in January 2023, is intended for voluntary use, and aims to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
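One action item above calls for mitigating representational bias in GAI content via techniques such as re-sampling. As a rough illustration only (the profile prescribes no specific implementation, and the function below is a hypothetical sketch), a naive oversampling pass that balances group representation might look like:

```python
import random
from collections import Counter

def resample_balanced(records, group_key, seed=0):
    """Oversample under-represented groups so every group appears as
    often as the largest one (a naive re-sampling sketch)."""
    rng = random.Random(seed)
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    target = max(len(recs) for recs in by_group.values())
    balanced = []
    for recs in by_group.values():
        balanced.extend(recs)
        # Draw extra samples with replacement until the group hits the target.
        balanced.extend(rng.choice(recs) for _ in range(target - len(recs)))
    return balanced

# Toy dataset: group "a" is over-represented 8:2 relative to group "b".
data = [{"group": "a"}] * 8 + [{"group": "b"}] * 2
counts = Counter(r["group"] for r in resample_balanced(data, "group"))
```

In practice re-sampling is usually combined with the other techniques the profile lists (re-ranking, adversarial training), since duplicating records alone can overfit minority-group examples.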
Operate and Monitor: Running, maintaining, and monitoring the AI system post-deployment
Deployer: Entity that integrates and deploys the AI system for end users
Manage: Prioritising, responding to, and mitigating AI risks
Other
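The Operate and Monitor stage above includes using structured user feedback to detect subtle shifts in output quality. As a minimal sketch, assuming feedback arrives as numeric scores (the function name and threshold are illustrative, not part of the NIST profile), a simple drift check compares recent scores against a baseline window:

```python
from statistics import mean, pstdev

def quality_shift(baseline, recent, threshold=2.0):
    """Flag a shift in feedback scores: alert when the recent mean
    drifts more than `threshold` baseline standard deviations from
    the baseline mean (a minimal drift-detection sketch)."""
    base_mean = mean(baseline)
    base_std = pstdev(baseline) or 1e-9  # guard against zero variance
    drift = abs(mean(recent) - base_mean) / base_std
    return drift > threshold, drift

# Toy example: user ratings dip from ~4.15 to ~3.15 after a model update.
baseline = [4.2, 4.0, 4.3, 4.1, 4.2, 4.0, 4.1, 4.3]
recent = [3.1, 3.3, 3.0, 3.2]
shifted, score = quality_shift(baseline, recent)
```

A real monitoring pipeline would run such a check on a rolling window and route alerts to the human reviewers the profile describes; statistical tests (e.g. a two-sample test) are a common refinement of this mean-and-deviation heuristic.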