Defines "artificial intelligence" broadly, covering systems that infer from inputs and generate outputs that influence environments or decisions. Requires risk assessments for processing activities that use AI, especially those affecting significant consumer decisions or involving extensive profiling. Mandates transparency and gives consumers the right to opt out of automated decisionmaking technology. Enforces privacy safeguards and oversight over AI applications to prevent discrimination and ensure accuracy.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding regulatory instrument with mandatory language throughout ('must', 'shall', 'required'), explicit enforcement mechanisms, penalties for non-compliance, and formal regulatory authority from the California Privacy Protection Agency.
The document covers 10 subdomains, with strong focus on discrimination and unfair treatment (1.1, 1.3), privacy compromise (2.1), AI system security (2.2), lack of transparency (7.4), and lack of robustness (7.3). Coverage is concentrated in the discrimination/toxicity, privacy/security, and AI system safety domains.
This regulation applies broadly across all economic sectors where businesses process consumer personal information using AI or automated decisionmaking technology. It has particularly detailed coverage for sectors involving significant decisions: Finance and Insurance, Health Care, Educational Services, Professional and Technical Services, and Public Administration. The regulation is sector-agnostic in its core requirements but provides sector-specific examples throughout.
The document comprehensively covers multiple AI lifecycle stages, with primary focus on the Build and Use Model, Verify and Validate, Deploy, and Operate and Monitor stages. It addresses training of AI systems, evaluation and testing requirements, deployment decisions, and ongoing monitoring obligations. The Plan and Design stage is implicitly covered through risk assessment requirements that apply before processing begins.
The document explicitly defines and covers both AI and automated decisionmaking technology broadly. It addresses AI systems, AI models (including generative models like large language models), and various AI applications. It does not specifically mention frontier AI, general purpose AI, task-specific AI, foundation models, or compute thresholds. It does address deepfakes and covers both generative capabilities and predictive/decision-making applications.
California Privacy Protection Agency
The document is a regulatory instrument proposed by the California Privacy Protection Agency (CPPA) to add provisions to existing California Consumer Privacy Act regulations. The title explicitly identifies this as a CPPA draft regulation.
California Privacy Protection Agency; Attorney General
The California Privacy Protection Agency has authority to receive risk assessment submissions and request unabridged assessments. The Attorney General is explicitly mentioned as having authority to request risk assessments and enforce compliance.
California Privacy Protection Agency; Attorney General
The Agency monitors compliance through mandatory risk assessment submissions and certifications of compliance, and has authority to review unabridged assessments. The Attorney General also has monitoring authority through its power to request risk assessments.
Businesses that process consumer personal information using AI or automated decisionmaking technology, service providers, contractors
The regulations apply to 'businesses' that use automated decisionmaking technology or AI to process consumer personal information, particularly for significant decisions, profiling, or training AI systems. Service providers and contractors are also subject to specific requirements.
10 subdomains (6 Good, 4 Minimal)