Establishes a licensing regime for AI developers, requiring risk management and compliance with oversight audits. Holds AI companies accountable for harms, including privacy violations and deepfake creation. Limits AI technology exports to adversaries. Demands transparency and consumer protection, especially for children.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a proposed legislative framework that has not yet been enacted into law. It uses prescriptive language ('should be required') indicating intended future hard law, but currently represents a policy proposal or blueprint rather than binding legislation.
The document covers approximately 12 to 14 subdomains, with the strongest focus on malicious actors (4.1, 4.2, 4.3), privacy compromise (2.1), AI system security (2.2), false information (3.1), governance failure (6.5), competitive dynamics (6.4), lack of transparency (7.4), and lack of robustness (7.3). Coverage is concentrated in the security, misuse-prevention, transparency, and governance domains.
This is an external regulatory framework that applies broadly across multiple sectors. It explicitly governs AI use in high-risk situations such as facial recognition, addresses sector-specific concerns in national security and information technology, and implicitly covers any sector deploying AI in consequential decision-making contexts. The framework is cross-sectoral in nature.
The document covers multiple lifecycle stages, with a primary focus on deployment, operation, and monitoring. It addresses pre-deployment testing and risk management (Build and Use Model, Verify and Validate), deployment requirements including registration and licensing (Deploy), and ongoing monitoring and incident reporting (Operate and Monitor). The planning and design stage is implicitly covered through risk management requirements.
The document explicitly mentions AI models and AI systems throughout. It specifically references general-purpose AI models (e.g., GPT-4) and generative AI. It does not explicitly mention frontier AI, foundation models, task-specific AI, predictive AI, open-weight models, or specific compute thresholds.
Not explicitly named; appears to be a bipartisan legislative proposal
The document is titled 'Bipartisan Framework for U.S. AI Act' and consistently directs its recommendations to Congress, suggesting it was proposed by legislators or policy groups.
Independent oversight body (proposed), state Attorneys General
The framework proposes the creation of an independent oversight body with enforcement authority, including auditing powers and cooperation with state Attorneys General, who would have concurrent enforcement authority.
Independent oversight body (proposed)
The proposed independent oversight body would have monitoring responsibilities, including tracking technological developments and economic impacts, maintaining a public database, and reporting on AI incidents and harms.
Companies developing sophisticated general-purpose AI models (e.g., GPT-4), companies deploying AI in high-risk or consequential situations, AI system providers
The framework explicitly targets both AI developers (those creating models) and AI deployers (those using AI systems in various contexts), with specific requirements for each group.
17 subdomains (2 Good, 15 Minimal)