Amends the U.S. Code to require disclosures and the inclusion of content provenance technologies in AI-generated false personation records, with disclosure requirements specific to the form of the record, and imposes criminal and civil penalties for non-compliance.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding federal statute amending the U.S. Code, with mandatory requirements, criminal penalties of up to 5 years' imprisonment, civil penalties of up to $150,000, and enforcement by the Attorney General and the federal courts.
The document covers 9 subdomains, with strong focus on malicious actors (4.1, 4.2, 4.3), misinformation (3.1, 3.2), privacy and security (2.1), discrimination and toxicity (1.2), and human-computer interaction (5.1). Coverage is concentrated in the misuse-prevention, information-integrity, and harmful-content-protection domains.
This is an external regulation with broad cross-sectoral applicability. The document governs deepfake production and distribution across multiple sectors: it explicitly covers Information (media, broadcasting, telecommunications) and Arts, Entertainment, and Recreation (motion pictures, performing arts), and implicitly covers sectors where deepfakes could be used for fraud or manipulation, including Finance and Insurance, Public Administration, and National Security.
The document primarily addresses the 'Deploy' and 'Operate and Monitor' stages of the AI lifecycle, with specific requirements for content provenance technologies during deployment and ongoing disclosure requirements during operation. It does not substantially address earlier stages such as planning, data collection, or model building.
The document explicitly addresses AI-generated content, specifically deepfakes and generative AI technologies. It focuses on 'advanced technological false personation records' produced through generative AI or similar technologies. The document does not mention AI models, AI systems, frontier AI, general purpose AI, foundation models, compute thresholds, or open-weight models. It is specifically scoped to generative AI applications that create deepfakes.
United States Congress
The document is titled 'DEEPFAKES Accountability Act' and is identified as being from the 'United States Congress' in the authority field, indicating Congress as the proposing legislative body.
Attorney General; United States Attorney's Office; Federal district courts
The Attorney General is explicitly designated as the primary enforcement authority with multiple enforcement responsibilities including designating coordinators, issuing regulations, and pursuing prosecutions. Federal district courts enforce through civil penalties and private rights of action.
Attorney General; United States Attorney's Office coordinators; Congress
The Attorney General is responsible for monitoring through designated coordinators who receive public reports, and must report to Congress every 5 years on trends in prosecutions and civil penalties. Congress is the recipient of these monitoring reports.
The document targets 'any person who, using any means or facility of interstate or foreign commerce, produces an advanced technological false personation record with the intent to distribute such record over the internet'. This covers both developers of deepfake technology and those who deploy or distribute such content, as well as those who alter records to remove required disclosures.
9 subdomains (6 Good, 3 Minimal)