Requires the National Academy of Sciences to establish a grant program for the development of safe AI models and AI research; directs the submission of a proposal for the program in accordance with specified guiding principles.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a proposed federal statute (H.R. 6088) introduced in the U.S. House of Representatives with binding legal language requiring the National Academy of Sciences to establish a grant program. The document uses mandatory language ('shall') throughout and would create legally enforceable obligations upon enactment.
The document has minimal coverage of specific risk subdomains. It broadly references AI safety and risk mitigation without detailing specific harms. The most relevant coverage relates to AI system safety failures (7.1, 7.2, 7.3) through its focus on developing 'safe AI models' and research into 'safety and risk reduction.' There are implicit references to governance (6.5) through the establishment of guiding principles and grant program structure. However, the document remains high-level and does not explicitly address most specific risk categories in the taxonomy.
This document does not govern specific economic sectors. Instead, it establishes a research grant program administered by the National Academy of Sciences to develop safe AI models across all sectors. The document mentions consultation with 'industry, government, academia, and high-tech stakeholders' but does not regulate AI use in any particular sector.
The document covers multiple AI lifecycle stages with primary focus on Plan and Design (through development of guiding principles), Build and Use Model (through safe AI model development grants), and Verify and Validate (through evaluation of safety incorporation and risk assessment methodologies). It also addresses Operate and Monitor through evaluation of grant outcomes.
The document explicitly mentions 'AI models' multiple times and references 'safe AI' throughout. It does not specifically mention frontier AI, general-purpose AI, foundation models, generative AI, predictive AI, open-weight models, or compute thresholds. The focus is on AI models broadly, without technical categorization.
Mr. Kiley; Ms. Garcia of Texas; United States Congress; Committee on Science, Space, and Technology
The bill was introduced by Representatives Kiley and Garcia of Texas in the U.S. House of Representatives, making Congress the proposing authority. The document header explicitly identifies the congressional sponsors and the committee of referral.
Committee on Science, Space, and Technology of the House of Representatives; Committee on Commerce, Science, and Transportation of the Senate; National Academy of Sciences
Congressional committees serve as the primary enforcement mechanism through oversight and review of the required proposal. The National Academy of Sciences would hold enforcement authority over grant administration and the evaluation of awardees.
National Academy of Sciences; Committee on Science, Space, and Technology; Committee on Commerce, Science, and Transportation
The National Academy of Sciences is responsible for monitoring grant implementation and evaluating outcomes. Congressional committees monitor compliance through the required proposal submission and ongoing oversight.
National Academy of Sciences; industry; government; academia; high-tech stakeholders; prospective grantees
The primary target is the National Academy of Sciences, which is mandated to establish the grant program. Secondary targets include AI developers and researchers who would participate in the grant program, as well as stakeholders consulted in developing guiding principles.