Establishes voluntary guidelines for AI assurance, covering testing and validation to build trust and accountability. Involves stakeholder input, sets qualifications for evaluators, and mandates a study on assurance capabilities. Aligns with existing AI risk management frameworks.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a legislative act that establishes voluntary guidelines and specifications for AI assurance. Although it is enacted by Congress (hard law authority), the substantive obligations it creates are explicitly voluntary, with no mandatory compliance requirements or enforcement mechanisms for AI developers or deployers.
The document has minimal to good coverage of 8 subdomains, with primary focus on AI system safety and failures (7.3, 7.4), governance structures (6.5), privacy concerns (2.1), discrimination risks (1.1, 1.3), and security vulnerabilities (2.2). Coverage is concentrated in technical assurance, testing, and validation domains rather than malicious use or socioeconomic impacts.
This is a cross-sectoral framework that does not govern specific economic sectors. Instead, it establishes voluntary guidelines for AI assurance applicable to any developer or deployer of AI systems across all sectors. The Advisory Committee includes representatives from multiple sectors (healthcare, public safety, etc.) but this reflects stakeholder input rather than sector-specific regulation.
The document comprehensively covers the Verify and Validate stage through its focus on testing, evaluation, validation, and verification of AI systems. It also addresses the Deploy stage and the Operate and Monitor stage through guidance on assurance frequency, disclosure of results, and corrective actions. The Plan and Design stage receives minimal coverage through references to risk management and governance processes.
The document explicitly mentions 'artificial intelligence systems' throughout and provides a formal definition referencing existing federal AI legislation. It does not specifically mention frontier AI, general purpose AI, foundation models, generative AI, predictive AI, open-weight models, or compute thresholds. The focus is on AI systems broadly defined without distinguishing between specific AI types or capabilities.
United States Congress
The document is styled as an Act of Congress and follows standard legislative format. It is federal legislation proposed and enacted by the U.S. Congress.
The Act establishes voluntary guidelines with no enforcement mechanisms for AI developers or deployers. The only mandatory obligations are on government agencies (NIST, Department of Commerce) to develop guidelines and conduct studies, not to enforce compliance by private entities.
National Institute of Standards and Technology (NIST), Department of Commerce, Artificial Intelligence Assurance Qualifications Advisory Committee
NIST is tasked with developing and periodically updating voluntary guidelines. The Secretary of Commerce must conduct a study to evaluate the capabilities of entities conducting AI assurances. An Advisory Committee is established to review and make recommendations.
Developers and deployers of artificial intelligence systems (not specifically named)
The Act defines and targets both 'developers' (entities that build, design, code, produce, train, or own AI systems) and 'deployers' (entities that operate AI systems). The voluntary guidelines are designed for these entities to use in conducting internal and external assurances.
8 subdomains (3 Good, 5 Minimal)