Defines trustworthy AI as valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Organizes AI risk management around four core functions: Govern, Map, Measure, and Manage.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a voluntary framework offering non-binding guidance and recommendations. The document explicitly states that it is voluntary, relies predominantly on permissive language ('should', 'can', 'may'), and establishes no enforcement mechanisms or legal penalties.
The document covers approximately 15-17 subdomains, with a strong focus on AI system safety failures (7.1, 7.2, 7.3, 7.4), privacy and security (2.1, 2.2), discrimination and fairness (1.1, 1.3), governance (6.5), and human-computer interaction (5.1, 5.2); a sketch of this mapping follows below. Coverage is concentrated in technical AI safety, trustworthiness characteristics, and governance frameworks.
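As a purely illustrative aid, the coverage mapping above could be encoded as a small lookup table. The theme names paraphrase the analysis text and the dictionary structure is an assumption for illustration, not part of the source document or its taxonomy.

```python
# Hypothetical encoding of the coverage mapping described above.
# Theme names paraphrase the analysis text; subdomain IDs are as cited.
coverage_map = {
    "AI system safety failures": ["7.1", "7.2", "7.3", "7.4"],
    "Privacy and security": ["2.1", "2.2"],
    "Discrimination and fairness": ["1.1", "1.3"],
    "Governance": ["6.5"],
    "Human-computer interaction": ["5.1", "5.2"],
}

# Flatten to count explicitly cited subdomains (11 here; the analysis
# estimates 15-17 covered overall, so this listing is partial).
cited = [sid for ids in coverage_map.values() for sid in ids]
print(f"{len(cited)} subdomains cited explicitly: {sorted(cited)}")
```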
This framework is sector-agnostic. It explicitly mentions healthcare and transportation as examples of safety-critical applications, but its governance guidance applies to any sector deploying AI systems and is designed to be adapted to different sectoral contexts.
The document comprehensively covers AI lifecycle stages from planning through monitoring. Its core framework is explicitly structured around the lifecycle, with detailed guidance for Plan and Design (MAP function), Data Collection (MAP 2.3), Model Building (MEASURE function), Verification and Validation (MEASURE 2), Deployment (MANAGE 1.1), and Operation and Monitoring (MANAGE 4.1); this mapping is sketched below.
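For reference, here is a minimal sketch of that lifecycle-to-function mapping. The stage names and RMF references are taken from the analysis above; the dictionary representation itself is an illustrative assumption, not a structure defined by NIST.

```python
# Minimal sketch of the lifecycle-stage-to-RMF-function mapping described
# above. Stage names and RMF references come from the analysis text; the
# dictionary representation itself is an illustrative assumption.
lifecycle_to_rmf = {
    "Plan and Design": "MAP function",
    "Data Collection": "MAP 2.3",
    "Model Building": "MEASURE function",
    "Verification and Validation": "MEASURE 2",
    "Deployment": "MANAGE 1.1",
    "Operation and Monitoring": "MANAGE 4.1",
}

for stage, ref in lifecycle_to_rmf.items():
    print(f"{stage:30s} -> {ref}")
```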
The document treats AI models and AI systems as core concepts throughout. It does not explicitly mention frontier AI, general-purpose AI, task-specific AI, foundation models, compute thresholds, or open-weight models, but it implicitly covers both generative and predictive AI through references to different model types.
National Institute of Standards and Technology (NIST)
NIST, a U.S. government agency that develops standards and guidelines, is the authoring organization of this framework, as indicated in the document title and throughout the text.
The framework is voluntary and does not establish enforcement mechanisms or designate enforcement bodies. Organizations self-implement based on their own risk management needs.
The framework places monitoring responsibility on the organizations that develop and deploy AI systems. Internal monitoring, assessment, and review processes are described throughout, with each organization responsible for its own compliance.
The framework targets organizations that design, develop, deploy, evaluate, or acquire AI systems. The document repeatedly references 'AI actors', including developers, deployers, and other organizations across the AI lifecycle.
12 subdomains (9 Good, 3 Minimal)