Establishes a voluntary database to track and process identified AI security and safety incidents and risks, and requires facilitating research to develop new safety guidelines and processes and to evaluate existing ones.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative act of the United States Congress, imposing mandatory obligations on federal agencies, establishing enforcement mechanisms (including whistleblower protections with legal remedies), and setting specific implementation timelines.
The document covers nine subdomains, with strong focus on AI system security (2.2), malicious actors and cyberattacks (4.2), competitive dynamics (6.4), governance failure (6.5), and AI safety failures (7.1, 7.2, 7.3). Coverage is concentrated in security, vulnerability management, and AI safety incident tracking.
This is an external regulation that applies broadly across all sectors where AI systems are deployed, with particular emphasis on critical infrastructure and safety-critical systems. The document does not limit its scope to specific industries but rather establishes cross-sectoral governance mechanisms for AI security and safety incident tracking.
The document primarily addresses the "Deploy" and "Operate and Monitor" lifecycle stages through its focus on incident tracking, vulnerability management, and post-deployment security. It also covers "Build and Use Model" through supply chain risk considerations, and "Verify and Validate" through the establishment of research test-beds for security testing.
The document explicitly mentions AI systems and AI models throughout, with particular focus on security vulnerabilities and safety incidents. It does not specifically define or distinguish between frontier AI, general purpose AI, task-specific AI, foundation models, generative AI, or predictive AI. There is no mention of compute thresholds or open-weight/open-source models.
United States Congress (Senate and House of Representatives)
The document is a Congressional bill proposed by the legislative branch of the United States government, as indicated by the opening text and structure.
Director of the National Institute of Standards and Technology, Director of the Cybersecurity and Infrastructure Security Agency, Secretary of Labor, United States district courts, Director of the National Security Agency
Multiple federal agencies and officials are designated with enforcement and implementation authority, including NIST Director for database establishment, CISA Director for vulnerability management, Secretary of Labor for whistleblower complaints, and federal courts for legal remedies.
Director of the National Institute of Standards and Technology, Director of the Cybersecurity and Infrastructure Security Agency, relevant congressional committees (Committee on Homeland Security and Governmental Affairs, Committee on Commerce, Science, and Transportation, Select Committee on Intelligence, Committee on the Judiciary of the Senate; Committee on Oversight and Accountability, Committee on Energy and Commerce, Permanent Select Committee on Intelligence, Committee on the Judiciary of the House of Representatives)
The NIST and CISA Directors are responsible for monitoring and tracking AI security and safety incidents through the voluntary database. The listed congressional committees receive reports on the sufficiency of vulnerability reporting processes.
Private sector entities, public sector organizations, civil society groups, academic researchers, employers of AI workers, Federal agencies, National Security Agency, National Institute of Standards and Technology, Cybersecurity and Infrastructure Security Agency
The Act targets multiple entity types: AI developers and deployers, who may report incidents voluntarily; federal agencies, which have mandatory obligations to establish databases and processes; and employers, who are prohibited from retaliating against whistleblowers.
9 subdomains (4 Good, 5 Minimal)