Requires healthcare entities using AI in California to comply with consumer protection, civil rights, competition, and data privacy laws. Advises that AI must not override physician judgment, discriminate against patients, or infringe patient privacy, and warns that AI-driven practices such as denying coverage based on stereotypes are prohibited under existing law.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a legal advisory document that provides guidance on the application of existing California laws to AI in healthcare. It uses advisory language ('should ensure', 'should consider') and does not create new binding obligations, but rather interprets how existing hard law applies to AI systems.
The document provides good coverage of 10 subdomains, with a strong focus on discrimination and toxicity (1.1, 1.3), privacy compromise (2.1), false information (3.1), fraud and manipulation (4.3), overreliance (5.1), loss of agency (5.2), governance failure (6.5), and system safety failures (7.3, 7.4). Coverage is concentrated in the discrimination, privacy, misinformation, and AI system reliability domains.
This document exclusively governs the Health Care and Social Assistance sector, comprehensively covering AI use by healthcare providers, insurers, hospitals, and related healthcare entities in California; it does not govern other sectors.
The document addresses multiple AI lifecycle stages, with a primary focus on deployment and operation/monitoring. It covers design considerations (non-discrimination requirements), data collection (privacy protections), and model development (testing and validation requirements), and it strongly emphasizes deployment requirements and ongoing monitoring obligations.
The document broadly refers to 'artificial intelligence (AI) and other automated decision systems' without defining specific technical categories. It mentions AI systems, AI models, and automated decisionmaking tools but does not explicitly distinguish between frontier AI, general purpose AI, foundation models, or other technical classifications. It does reference generative AI specifically in examples.
California Attorney General's Office (AGO)
The document explicitly states it is issued by the California Attorney General's Office to provide guidance on AI use in healthcare.
California Attorney General's Office; California Department of Managed Health Care; California Department of Insurance; FDA (Food and Drug Administration); relevant state agencies
The document references enforcement through existing California laws and regulatory agencies. The AGO itself enforces the Unfair Competition Law and other consumer protection statutes. Other state agencies are mentioned for inspection and audit authority.
California Attorney General's Office; relevant state agencies; California Department of Managed Health Care; California Department of Insurance
The document indicates that state agencies have authority to inspect and audit AI systems used by health care service plans, and the AGO is actively investigating AI use in healthcare.
healthcare providers; insurers; vendors; investors; healthcare entities; hospitals; medical professionals; physicians; health care service plans; electronic health records (EHR) companies; digital health companies
The advisory explicitly targets healthcare providers, insurers, vendors, and other healthcare entities that develop, sell, and use AI systems. It addresses both developers of AI systems and deployers (healthcare providers and insurers using AI).
10 subdomains (8 Good, 2 Minimal)