Requires the Department of Homeland Security, acting through FEMA, to develop AI testing centers for civilian agencies that ensure protection of rights and adherence to democratic principles. Establishes an AI Incident Reporting Office, mandates biannual reports to Congress, and authorizes $20 billion for implementation.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative bill with mandatory language throughout, establishing legal obligations for federal agencies with specific enforcement mechanisms, reporting requirements, and appropriated funding.
The document covers approximately 8-10 subdomains well, with a strong focus on governance (6.5), privacy (2.1), discrimination and rights protection (1.1, 1.3), human agency (5.2), AI system robustness (7.3), and transparency (7.4). Coverage is concentrated in governance structures, rights protection, and AI system testing.
This document primarily governs Public Administration (excluding National Security), as it establishes requirements for federal civilian agencies' acquisition and use of AI systems. It regulates internal government operations across all civilian federal agencies rather than private-sector activities.
The document primarily focuses on the Verify and Validate, Deploy, and Operate and Monitor stages. It establishes testing and certification centers for AI systems before federal acquisition (validation), governs deployment decisions through scoring systems, and requires ongoing monitoring through incident reporting.
The document explicitly mentions AI systems and generative AI. It focuses on testing and certification of AI systems for federal civilian agency use, with specific mention of generative AI training engines and test beds. It specifies no compute thresholds, no model types (e.g., foundation or open-weight models), and no distinction between general-purpose and task-specific AI.
United States Congress
The document is explicitly identified as a Congressional bill by its enacting clause: 'Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled.'
Department of Homeland Security; Federal Emergency Management Agency; Secretary of Homeland Security
The Secretary of Homeland Security, acting through FEMA, is designated as the primary enforcement authority responsible for developing testing centers, establishing the incident reporting office, and ensuring compliance with requirements.
United States Congress; Department of Homeland Security; Office of Artificial Intelligence Incident Reporting
Congress receives biannual reports on implementation. The Office of Artificial Intelligence Incident Reporting monitors and collects information on agencies' experiences with AI systems across the federal government.
Federal civilian agencies; Department of Homeland Security; Federal Emergency Management Agency
The bill targets federal civilian agencies, which must acquire and use AI systems through the testing centers and report adverse experiences. It also affects AI developers, whose systems will be tested and scored for federal use.
10 subdomains (6 Good, 4 Minimal)