Requires the Federal Trade Commission to mandate impact assessments for automated decision systems and augmented critical decision processes. Obligates covered entities to document and submit these assessments, consult stakeholders, identify and mitigate risks, and report findings. Establishes a Bureau of Technology within the FTC and supports enforcement by the States.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative act proposed by the United States Congress that establishes mandatory requirements for covered entities to perform impact assessments of automated decision systems, with explicit enforcement mechanisms through the Federal Trade Commission and state attorneys general, including penalties for non-compliance.
The document covers approximately 8-10 subdomains, with a strong focus on unfair discrimination (1.1, 1.3), privacy compromise (2.1), AI system security (2.2), lack of transparency (7.4), and lack of robustness (7.3). Coverage is concentrated in the discrimination/toxicity, privacy/security, and AI system safety domains, with minimal coverage of malicious actors, misinformation, or socioeconomic risks.
The document governs AI use across multiple sectors through its definition of 'critical decisions' which explicitly covers education, employment, utilities, family planning, financial services, healthcare, housing, and legal services. The regulation applies horizontally across these sectors wherever automated decision systems are deployed for critical decisions affecting consumers.
The document comprehensively covers all stages of the AI lifecycle, with particularly strong emphasis on the Verify and Validate, Deploy, and Operate and Monitor stages. It requires impact assessments throughout development and deployment, including pre-deployment evaluation and ongoing post-deployment monitoring.
The document focuses on 'automated decision systems' and 'augmented critical decision processes' as its primary technical scope. It does not explicitly mention frontier AI, general purpose AI, foundation models, generative AI, predictive AI, or compute thresholds. The scope is defined functionally around systems that make or assist in critical decisions affecting consumers.
United States Congress; Senator Ron Wyden; Senator Cory Booker; Senator Martin Heinrich; Senator Gary Peters; Senator Bob Casey; Senator Ben Ray Luján; Senator Tammy Baldwin; Senator Jeff Merkley; Senator Sheldon Whitehouse; Senator Brian Schatz; Senator Mazie Hirono; Senator Elizabeth Warren
The document is a bill introduced in the United States Senate by Senator Wyden and co-sponsored by multiple senators, as indicated in the header and introduction section.
Federal Trade Commission; State Attorneys General
The Federal Trade Commission is designated as the primary enforcement authority with power to promulgate regulations and enforce compliance. State attorneys general are also granted enforcement authority through parens patriae actions.
Federal Trade Commission; Bureau of Technology; National Institute of Standards and Technology; Office of Science and Technology Policy
The FTC is responsible for monitoring compliance through annual summary reports from covered entities and for maintaining a publicly accessible repository. A new Bureau of Technology is established within the FTC to assist with the technological aspects of monitoring. NIST and OSTP have review roles in developing future standards.
The Act targets 'covered entities', defined as persons, partnerships, or corporations that meet specific revenue or data thresholds and that either deploy augmented critical decision processes or develop automated decision systems for such processes. These entities function as both AI developers and deployers.
7 subdomains (5 Good, 2 Minimal)