Amends the general business law to establish the Responsible AI Safety and Education Act (RAISE Act), requiring transparency, safety protocols, and mandatory disclosures for frontier AI models. Enables state enforcement through civil penalties and prohibits deployments that pose a risk of critical harm. Applies to frontier models in New York.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative act from the New York State Assembly that establishes mandatory requirements with civil penalties for non-compliance and enforcement by the Attorney General.
The document has good coverage of approximately 8-10 subdomains, with strong focus on malicious actors (4.1, 4.2, 4.3), AI system security (2.2), competitive dynamics (6.4), governance (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in security, misuse prevention, and AI safety domains.
This document does not govern AI use within specific economic sectors. Rather, it regulates large developers of frontier AI models regardless of the sector in which they operate. The regulation applies to the AI development industry itself (Information sector and Scientific Research and Development Services) but does not impose sector-specific requirements on AI deployment in healthcare, finance, or other application domains.
The document primarily focuses on the "Deploy" and "Operate and Monitor" stages of the AI lifecycle, with some coverage of the "Build and Use Model" and "Verify and Validate" stages. It establishes requirements for pre-deployment safety protocols, testing, and post-deployment monitoring and incident reporting for frontier AI models.
The document explicitly defines and covers AI models, artificial intelligence systems, and frontier AI models with specific compute thresholds (10^26 operations, $100M+ compute cost). It does not explicitly mention general purpose AI, task-specific AI, foundation models, generative AI, predictive AI, or open-weight models. Compute thresholds are central to the regulatory framework.
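The two thresholds above can be sketched as a simple classification check. This is an illustrative sketch only: the threshold values come from the document, but how the two criteria combine (here assumed to be a conjunction), and all names used below, are assumptions rather than the bill's authoritative definition.

```python
from dataclasses import dataclass

# Threshold values taken from the document. Treating them as a
# conjunction (both must be exceeded) is an assumption for
# illustration; consult the bill text for the authoritative test.
OPS_THRESHOLD = 10**26          # training compute, in operations
COST_THRESHOLD = 100_000_000    # training compute cost, in USD

@dataclass
class ModelTraining:
    compute_ops: float       # total operations used in training
    compute_cost_usd: float  # total compute cost of training

def is_frontier_model(t: ModelTraining) -> bool:
    """Sketch of the compute-threshold test for a 'frontier AI model'."""
    return t.compute_ops > OPS_THRESHOLD and t.compute_cost_usd > COST_THRESHOLD

# A model trained with 2e26 operations at a $150M compute cost
# would exceed both thresholds under this reading.
print(is_frontier_model(ModelTraining(2e26, 150e6)))
```

The sketch makes the regulatory trigger concrete: a model falls under the framework only by crossing numeric compute thresholds, not by its architecture or capability class.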
New York State Assembly Members: M. of A. BORES, LASHER, SEAWRIGHT, PAULIN, TAPIA, RAGA, SHIMSKY, REYES, EPSTEIN, BURKE, HEVESI, P. CARROLL, ZACCARO, HYNDMAN, LUPARDO, KASSAY, LEE, DAVILA, SCHIAVONI, LUNSFORD, K. BROWN, TANNOUSIS, TORRES, HOOKS, GIBBS, ROMERO, COLTON, CONRAD, MEEKS, GLICK, CRUZ, CUNNINGHAM, FORREST, CHANDLER-WATERMAN, STIRPE, WRIGHT, SIMON, DAIS, JENSEN, ROZIC, GONZALEZ-ROJAS
The document is a New York State Assembly bill introduced by multiple Assembly members and referred to various committees for consideration.
New York Attorney General and Division of Homeland Security and Emergency Services
The Attorney General is explicitly granted enforcement authority to bring civil actions and impose penalties, while the Division of Homeland Security and Emergency Services receives safety incident disclosures and has access to safety protocols.
New York Attorney General and Division of Homeland Security and Emergency Services
The same entities that enforce also monitor compliance: they receive safety and security protocols and safety incident reports, and have access to unredacted documentation for oversight purposes.
Large developers of frontier AI models (defined as persons that have trained at least one frontier model and spent over $100 million in compute costs), excluding accredited colleges and universities engaged in academic research
The act specifically targets 'large developers' who train frontier models with specific compute thresholds and cost requirements, imposing transparency, safety, and disclosure obligations on these entities.
12 subdomains (7 Good, 5 Minimal)