Establishes an Artificial Intelligence Safety Review Office to evaluate AI models for national security risks. Requires AI developers to follow red-teaming and cybersecurity standards. Prohibits deployment of AI models found to pose such risks. Imposes fines and criminal penalties for non-compliance. Allocates $50 million.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is binding federal legislation with mandatory obligations, enforcement mechanisms including criminal penalties (up to 10 years' imprisonment) and civil fines (up to $1,000,000 per day), and judicial enforcement through the district courts.
The document covers roughly ten subdomains, with strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), competitive dynamics (6.4), governance structures (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in the security, misuse-prevention, and AI safety domains, particularly CBRN and cyber risks.
This is external federal regulation that governs AI development and deployment across all sectors where covered frontier AI models are developed or deployed. The primary sectors directly regulated are Information (AI developers, cloud providers) and Scientific Research and Development Services (AI research organizations). The regulation has cross-sectoral applicability as it governs AI models that could be used in any sector, with particular attention to national security implications.
The document covers multiple AI lifecycle stages with primary focus on Build and Use Model, Verify and Validate, Deploy, and Operate and Monitor stages. It addresses model development through red-teaming and cybersecurity standards, pre-deployment evaluation and review processes, deployment notification and prohibition mechanisms, and ongoing monitoring through reporting requirements.
The document explicitly defines and focuses on 'covered frontier artificial intelligence models' with specific compute thresholds (10^26 operations for training). It addresses both AI models and broader systems, with emphasis on general-purpose, broadly capable models. The document does not explicitly mention foundation models, generative AI, predictive AI, or distinguish between open-weight and closed models, though it does address deployment broadly including open source release.
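The 10^26-operation training-compute threshold is a concrete, checkable criterion. A minimal sketch of how such a check might look, assuming the common C ≈ 6·N·D estimate for dense transformer training compute (N parameters, D training tokens); the function names and example model figures below are illustrative assumptions, not drawn from the Act:

```python
# Rough check of whether a training run crosses the 10^26-operation
# threshold for "covered frontier artificial intelligence models".
# Uses the common C ~= 6 * N * D estimate for dense transformer training;
# all model figures below are illustrative assumptions.

COVERED_THRESHOLD_OPS = 1e26  # threshold stated in the document


def training_compute_ops(params: float, tokens: float) -> float:
    """Approximate total training operations via C ~= 6 * N * D."""
    return 6.0 * params * tokens


def is_covered(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return training_compute_ops(params, tokens) >= COVERED_THRESHOLD_OPS


# Example: a hypothetical 1-trillion-parameter model trained on 20T tokens.
ops = training_compute_ops(1e12, 20e12)  # 6 * 1e12 * 20e12 = 1.2e26
print(f"{ops:.2e} operations -> covered: {is_covered(1e12, 20e12)}")
```

Any real compliance determination would of course depend on the Act's precise definition of "operations", not on this approximation.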
United States Congress; Senate; House of Representatives
The document is proposed legislation in the United States Congress, as indicated by its standard enactment clause and legislative format.
Artificial Intelligence Safety Review Office; Under Secretary of Commerce for Artificial Intelligence Safety; Attorney General; Secretary of Commerce; Department of Commerce
The Act establishes the Artificial Intelligence Safety Review Office under the Department of Commerce, led by the Under Secretary of Commerce for Artificial Intelligence Safety, with enforcement authority. The Attorney General is granted judicial enforcement powers through district courts.
Artificial Intelligence Safety Review Office; Under Secretary of Commerce for Artificial Intelligence Safety; Department of Energy; Department of Homeland Security; Department of Health and Human Services; Bureau of Industry and Security; National Institute of Standards and Technology; National Nuclear Security Administration; Cybersecurity and Infrastructure Security Agency; National Security Agency
The Office is responsible for monitoring and evaluation functions, including conducting pre-deployment reviews, evaluating compliance, and coordinating with multiple federal agencies. The Under Secretary conducts biennial studies and submits annual reports to Congress on Office activities.
The Act explicitly targets covered frontier artificial intelligence model developers (those who develop, train, or create covered frontier AI models), sellers of covered integrated circuits, infrastructure-as-a-service providers, and owners of covered data centers. These are AI developers and infrastructure providers as defined in the taxonomy.
10 subdomains (5 Good, 5 Minimal)