Requires U.S. persons to report quarterly on AI model training runs exceeding 10^26 computational operations or on the development of computing clusters exceeding specified performance thresholds. Mandates responses to BIS inquiries within set timelines, including inquiries related to dual-use model safety and security measures.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding federal regulation with mandatory reporting requirements, specific enforcement mechanisms, and legal obligations imposed on covered U.S. persons by the Department of Commerce under statutory authority.
The document has minimal to good coverage of nine subdomains, with primary focus on AI system security (2.2), malicious actors and weapons development (4.2), competitive dynamics (6.4), governance failure (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in security, dual-use risks, and oversight mechanisms.
This regulation applies broadly across all sectors in which covered U.S. persons develop advanced AI models or computing clusters meeting the specified thresholds. Its approach is sector-agnostic, focusing on technical capabilities rather than industry-specific applications. However, the definition of dual-use foundation models references capabilities relevant to national security, cybersecurity, and CBRN weapons, suggesting particular relevance to the defense, information technology, and scientific research sectors.
The document primarily covers the Build and Use Model stage through training run reporting requirements, and the Operate and Monitor stage through ongoing quarterly notifications and red-team testing requirements. It also addresses the Deploy stage through computing cluster acquisition reporting.
The document explicitly defines and covers AI models, AI systems, and dual-use foundation models with specific technical thresholds. It establishes a compute threshold of 10^26 operations for training runs and 10^20 OP/s for computing clusters. It does not explicitly mention frontier AI, general purpose AI, task-specific AI, generative AI, predictive AI, or open-weight models.
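The two quantitative thresholds above can be illustrated with a minimal sketch. This is not official compliance guidance; the function and constant names are hypothetical, and the sketch assumes a simple "exceeds threshold" comparison as described in the summary.

```python
# Illustrative sketch only (not official guidance). Names are hypothetical.
# Checks an activity against the two reporting thresholds stated in the rule:
#   - training runs exceeding 10^26 total computational operations
#   - computing clusters exceeding 10^20 operations per second

TRAINING_OPS_THRESHOLD = 1e26   # total operations for a single training run
CLUSTER_OPS_PER_SEC_THRESHOLD = 1e20  # theoretical peak operations per second

def training_run_reportable(total_ops: float) -> bool:
    """True if a training run exceeds the 10^26-operation threshold."""
    return total_ops > TRAINING_OPS_THRESHOLD

def cluster_reportable(peak_ops_per_second: float) -> bool:
    """True if a computing cluster exceeds the 10^20 OP/s threshold."""
    return peak_ops_per_second > CLUSTER_OPS_PER_SEC_THRESHOLD

if __name__ == "__main__":
    print(training_run_reportable(2e26))  # True: exceeds 10^26 operations
    print(cluster_reportable(5e19))       # False: below 10^20 OP/s
```

Note that the two thresholds measure different quantities: the training threshold is a cumulative operation count over a run, while the cluster threshold is an instantaneous rate (operations per second).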
Department of Commerce, Bureau of Industry and Security (BIS), Assistant Secretary for Export Administration (Thea D. Rozman Kendler)
The document is a proposed amendment to federal regulations issued by the Department of Commerce through the Bureau of Industry and Security, with authority derived from federal statute and executive orders.
Bureau of Industry and Security (BIS), Department of Commerce
BIS has explicit authority to receive notifications, send questions, require responses within specified timelines, demand corrections, and request clarifications from covered U.S. persons.
Bureau of Industry and Security (BIS), Department of Commerce
BIS monitors compliance through quarterly notifications, reviews responses to questions, tracks affirmations of no applicable activities, and has authority to request clarifications and corrections.
Covered U.S. persons (including U.S. citizens, lawful permanent residents, entities organized under U.S. laws, and persons located in the United States) who engage in training advanced AI models or developing large-scale computing clusters
The regulation explicitly targets 'covered U.S. persons' who conduct AI model training runs exceeding 10^26 operations or develop computing clusters meeting specified thresholds. This includes corporations, partnerships, academic institutions, and research centers.
9 subdomains (4 Good, 5 Minimal)