Establishes governance requirements for Federal AI systems to ensure actions align with laws, promote fairness, manage risks, and maintain transparency. Requires the development of AI charters and oversight, with provisions for public disclosure, privacy protection, and periodic evaluations.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is binding federal legislation with mandatory requirements, enforcement mechanisms through the Director of OMB and agency Inspectors General, and legal penalties for non-compliance. As introduced, it is a bill rather than an enacted statute.
The document covers 11 subdomains (9 rated Good, 2 Minimal), with strong focus on discrimination and fairness (1.1, 1.3), privacy protection (2.1), AI system security (2.2), misinformation risks (3.1), governance structures (6.5), and AI system safety failures (7.1, 7.3, 7.4). Coverage is concentrated in the fairness, transparency, accountability, and system reliability domains.
This document governs AI use across all Federal government agencies and operations, primarily covering the Public Administration sector. It applies to all federal agencies using AI systems, with exemptions for national security systems. The governance framework extends to contractors providing AI systems to the federal government.
The document comprehensively covers all stages of the AI lifecycle, with particular emphasis on deployment, operation, and monitoring. It addresses planning and design (governance charter requirements), data collection and processing (training data documentation), model building (development oversight), verification and validation (testing requirements), deployment (notification processes), and ongoing operation (monitoring and evaluation mandates).
The document explicitly defines and covers 'artificial intelligence' and 'artificial intelligence systems' broadly. It does not specifically mention frontier AI, general purpose AI, task-specific AI, foundation models, generative AI, predictive AI, open-weight models, or compute thresholds. The focus is on Federal AI systems regardless of their technical characteristics.
United States Congress; Mr. Comer; Mr. Raskin; Ms. Mace; Ms. Ocasio-Cortez; Mr. Higgins of Louisiana; Mr. Connolly; Mr. Langworthy; Mr. Khanna; Committee on Oversight and Accountability
The bill was introduced in the House of Representatives by multiple congressional representatives and referred to the Committee on Oversight and Accountability, indicating that Congress is the proposer of this governance framework.
Director of the Office of Management and Budget; Inspector General; Administrator of General Services; Director of the National Institute of Standards and Technology; Director of the Office of Science and Technology Policy; Director of the Office of Personnel Management
The Director of OMB has primary enforcement authority with explicit powers to oversee compliance and take enforcement actions. Inspectors General conduct independent evaluations, and the Comptroller General provides periodic oversight.
Inspector General; Comptroller General; appropriate congressional committees; Committee on Oversight and Accountability of the House of Representatives; Committee on Homeland Security and Governmental Affairs of the Senate
Inspectors General are required to conduct independent evaluations every two years, the Comptroller General periodically evaluates implementation, and congressional committees receive reports on compliance and effectiveness.
Federal agencies; agency Chief Information Officer; Chief Data Officer; senior agency official for privacy; contractors or subcontractors with the Federal Government
The document explicitly targets federal agencies and their officials responsible for AI systems, as well as contractors who build, provide, operate, or maintain federal AI systems.
11 subdomains (9 Good, 2 Minimal)