Establishes property rights in an individual's likeness and voice, transferable and extending ten years post-mortem. Prohibits unauthorized cloning services and digital replicas, imposing civil liability. Specifies per-violation damages. Allows punitive damages and attorney fees. Provides a First Amendment defense. Exempts negligible harms. Sets a four-year limitations period.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a proposed federal statute (H.R. 6943) introduced in the U.S. House of Representatives; if enacted, it would establish enforceable property rights, prohibit specific conduct, and impose mandatory civil liabilities with specific damage amounts, punitive damages, and attorney fees.
The document primarily addresses risks related to malicious actors (4.1, 4.3), privacy compromise (2.1), misinformation (3.1), and human-computer interaction (5.1). It focuses on unauthorized use of AI to create deepfakes and voice replicas for fraud, manipulation, and non-consensual content creation. Coverage is concentrated in misuse prevention, privacy protection, and addressing deceptive AI applications.
The Act governs AI use across multiple sectors, with strongest coverage in Arts, Entertainment, and Recreation (music, film, celebrity likeness), Information (social media platforms, content distribution), and Professional and Technical Services (advertising, marketing). It also applies to Educational Services and potentially other sectors where AI-generated voice/likeness replicas could be used.
The document does not focus on specific AI lifecycle stages but rather regulates the deployment and use of AI-generated outputs (digital voice replicas and depictions). It primarily addresses the Deploy and Operate stages by prohibiting the unauthorized distribution of personalized cloning services and the publication of AI-generated replicas.
The document explicitly addresses AI technology broadly, including artificial intelligence, machine learning, algorithms, and digital technology used to create voice replicas and visual depictions. It does not distinguish between different types of AI models (frontier, general purpose, task-specific, foundation, generative, predictive) or mention compute thresholds or open-weight models.
Ms. Salazar, Ms. Dean of Pennsylvania, Mr. Moran, Mr. Morelle, and Mr. Wittman (U.S. House of Representatives members); Committee on the Judiciary
The bill was introduced in the House of Representatives by Representative Salazar and co-sponsors, then referred to the Committee on the Judiciary for consideration.
Federal courts; individuals whose voice or likeness is at issue; assignees or exclusive licensees of voice/likeness rights; persons or entities with exclusive contracts for recording artists' services
Enforcement occurs through civil actions brought in federal courts. Standing to sue is granted to the affected individual, their assignees/licensees, or in the case of music professionals, entities with exclusive contracts for their services.
The Act does not establish any monitoring body or oversight mechanism. It relies on private civil enforcement through lawsuits rather than regulatory monitoring.
Any person or entity who distributes personalized cloning services, publishes digital voice replicas or digital depictions, or materially contributes to such conduct
The Act targets any person or entity who distributes personalized cloning services or publishes unauthorized digital voice replicas/depictions affecting interstate or foreign commerce, including those who materially contribute to or facilitate such conduct.
8 subdomains (3 Good, 5 Minimal)