Establishes the "Artificial Intelligence Safety Institute" to support AI best practices and innovation across sectors. Requires AI research and standards development, and mandates an international coalition for AI standards. Directs reporting on regulatory barriers, and orders the prioritization of AI training data and the creation of federal AI challenges.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative act from the United States Congress with mandatory language establishing institutes, requiring reports, and directing federal agencies to take specific actions. While many provisions relate to voluntary standards, the Act itself contains enforceable obligations on federal agencies.
The document has minimal coverage of specific risk domains, with most focus on governance structures and research infrastructure rather than explicit risks. Coverage is primarily concentrated in governance failure (6.5) with implicit references to system safety (7.3, 7.4). The document emphasizes voluntary standards, testing, and international cooperation rather than addressing specific harms or risks described in the MIT taxonomy.
This Act primarily governs federal government operations and cross-sector AI research and development infrastructure. It does not regulate specific economic sectors; rather, it establishes voluntary standards, testing capabilities, and research programs that could apply across multiple sectors. Specific sectors mentioned include manufacturing, maritime, border security, and materials science, but these are referenced as application areas for AI research challenges rather than as regulated sectors.
The document covers multiple AI lifecycle stages, with primary emphasis on the Build and Use Model, Verify and Validate, and Operate and Monitor stages. It establishes comprehensive testing, evaluation, and standards development infrastructure. The Plan and Design stage is addressed through standards development and best practices, and data collection is covered through public dataset priorities.
The document explicitly mentions AI models, AI systems, foundation models, and generative AI with detailed definitions. It does not explicitly mention frontier AI, general purpose AI, task-specific AI, predictive AI, or open-weight models. Compute thresholds are not mentioned. The focus is on broad AI system categories rather than specific model types or capability thresholds.
United States Congress
The document is an Act of Congress, as indicated by the legislative format, section structure, and references to Congressional committees. It was proposed and enacted by the United States Congress.
Under Secretary of Commerce for Standards and Technology; Director of the Artificial Intelligence Safety Institute; Secretary of Commerce; Secretary of Energy; Director of the National Science Foundation; Director of the Office of Science and Technology Policy; Comptroller General of the United States
The Act designates specific federal officials and agency heads with authority to implement, oversee, and ensure compliance with the Act's provisions. These officials are responsible for establishing institutes, developing standards, and submitting required reports to Congress.
Congress; Committee on Commerce, Science, and Transportation of the Senate; Committee on Science, Space, and Technology of the House of Representatives; Committee on Energy and Natural Resources of the Senate; Comptroller General of the United States; Artificial Intelligence Safety Institute
Congressional committees receive required reports and conduct oversight. The Comptroller General is tasked with evaluating regulatory impediments and agency challenges. The AI Safety Institute conducts ongoing testing, evaluation, and monitoring of AI systems.
National Institute of Standards and Technology; Department of Energy; National Science Foundation; Federal agencies; private sector entities; companies of all sizes; developers of foundation models; National Laboratories
The Act primarily targets federal agencies with mandatory obligations to establish institutes, develop standards, and conduct research. It also targets private sector AI developers and deployers through voluntary participation in testing, standards development, and public-private partnerships.
8 subdomains covered (all 8 at Minimal level)