Requires developers of frontier AI models to implement comprehensive safety and security protocols and conduct detailed risk assessments. Mandates annual audits and incident reporting within 72 hours, creates a regulatory board, and introduces plans for a public cloud computing cluster called CalCompute.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative act (SB 1047) with mandatory obligations, civil penalties for violations, enforcement by the Attorney General, and detailed compliance requirements including audits, reporting, and safety protocols.
The document covers 13 subdomains (8 rated Good, 5 Minimal), with strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), competitive dynamics (6.4), governance failure (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in the security, misuse prevention, and AI safety domains, with particular emphasis on preventing critical harms from frontier AI models.
This legislation primarily governs the Information sector (AI developers, cloud computing providers, data processing) and Scientific Research and Development Services sector (AI research institutions, universities conducting AI research). It also has implications for the Professional and Technical Services sector through third-party auditing requirements. The CalCompute initiative specifically targets academic research institutions within Educational Services.
The document comprehensively covers multiple AI lifecycle stages, with primary focus on the Build and Use Model, Verify and Validate, Deploy, and Operate and Monitor stages. It addresses planning (required safety and security protocols), model development (compute thresholds), testing and validation (extensive pre-deployment assessment requirements), deployment (restrictions on release), and operation (ongoing monitoring and incident-reporting obligations).
The document explicitly focuses on frontier AI models defined by compute thresholds (10^26 FLOPs for training, 3×10^25 FLOPs for fine-tuning). It addresses both AI models and AI systems, covers model derivatives and fine-tuning, and includes provisions for computing clusters. The document does not explicitly distinguish between general-purpose AI, task-specific AI, foundation models, generative AI, or predictive AI, but the compute thresholds and capabilities described suggest coverage of large-scale general-purpose models.
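The compute thresholds above can be made concrete with a short sketch. This is an illustrative classifier only, not language from the bill: the function name and labels are hypothetical, and it deliberately ignores the bill's additional dollar-cost criteria, checking only the FLOP thresholds stated above.

```python
# Hypothetical sketch: classify a training run against the two FLOP
# thresholds described above (10^26 for training, 3 x 10^25 for
# fine-tuning). Ignores any cost-based criteria in the actual bill.

TRAINING_THRESHOLD_FLOPS = 1e26    # threshold for an original covered model
FINE_TUNE_THRESHOLD_FLOPS = 3e25   # threshold for a covered model derivative

def classify_run(compute_flops: float, is_fine_tune: bool = False) -> str:
    """Return a rough coverage label for a given compute budget."""
    threshold = FINE_TUNE_THRESHOLD_FLOPS if is_fine_tune else TRAINING_THRESHOLD_FLOPS
    return "covered" if compute_flops >= threshold else "not covered"

print(classify_run(2e26))                      # large pretraining run -> covered
print(classify_run(5e25, is_fine_tune=True))   # large fine-tune -> covered
print(classify_run(1e24))                      # small run -> not covered
```

The point of the sketch is that coverage turns on a simple numeric cutoff applied to total training (or fine-tuning) compute, which is why the document can speak of "covered models" as a well-defined class.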
California State Legislature
This is a legislative act (SB 1047) proposed and passed by the California State Legislature, as indicated by the bill number and legislative findings.
Attorney General of California, Labor Commissioner, Board of Frontier Models, Government Operations Agency
The Attorney General has primary enforcement authority with power to bring civil actions and impose penalties. The Labor Commissioner has enforcement authority for whistleblower protections. The Board of Frontier Models and Government Operations Agency have regulatory and oversight responsibilities.
Board of Frontier Models, Government Operations Agency, Attorney General, third-party auditors
The Board of Frontier Models oversees regulatory updates and guidance. The Government Operations Agency issues regulations and guidance. The Attorney General receives compliance statements, audit reports, and incident reports. Third-party auditors conduct annual compliance audits.
Developers of frontier AI models (covered models), operators of computing clusters
The act explicitly targets developers of covered models (frontier AI models meeting specific compute thresholds) and operators of computing clusters. These are defined as entities that train AI models using specified amounts of computing power or operate data centers capable of training such models.
13 subdomains (8 Good, 5 Minimal)