Establishes consumer protections against AI-induced discrimination. Requires developers and deployers of high-risk AI systems to document, assess, and mitigate algorithmic discrimination risks, and mandates notification to consumers about AI-influenced decisions. Empowers the Attorney General to enforce these requirements.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding legislative statute with mandatory obligations, enforcement mechanisms through the Attorney General, and penalties for non-compliance under Chapter 93A (unfair trade practices).
The document covers seven subdomains (five with good coverage, two minimal), with strong focus on unfair discrimination (1.1), unequal performance across groups (1.3), lack of transparency (7.4), and governance failure (6.5). Coverage is concentrated in the discrimination/toxicity and system safety domains, with minimal coverage of privacy, misinformation, or malicious actor risks.
The document governs AI use across multiple sectors through its definition of 'consequential decisions'. Primary sectors with good coverage include Educational Services, Finance and Insurance, Health Care and Social Assistance, Real Estate, Professional and Technical Services, and Public Administration. The statute applies to any AI system making decisions affecting education, employment, financial services, government services, healthcare, housing, insurance, or legal services.
The document comprehensively covers multiple lifecycle stages, with primary focus on the 'Deploy' and 'Operate and Monitor' stages. It addresses planning through risk management requirements, data collection through data governance measures, model building through documentation of training, verification through impact assessments, deployment through notification requirements, and ongoing monitoring through annual reviews and post-deployment oversight.
The document explicitly defines and extensively covers AI systems and high-risk AI systems. It does not explicitly mention frontier AI, general purpose AI, task-specific AI, foundation models, generative AI, predictive AI, open-weight models, or specific compute thresholds. The focus is on AI systems that make consequential decisions affecting consumers.
Massachusetts House of Representatives
This is Massachusetts House Docket 4053 (2025), indicating it was introduced in the Massachusetts state legislature through the House of Representatives.
Attorney General of Massachusetts
The Attorney General is explicitly granted exclusive enforcement authority over this chapter, with power to bring enforcement actions, require disclosures, and promulgate implementing rules.
Attorney General of Massachusetts
The Attorney General has monitoring authority through disclosure requirements and evaluation powers. Additionally, developers and deployers have self-monitoring obligations including ongoing testing, annual reviews, and post-deployment monitoring.
The document explicitly targets developers and deployers of high-risk AI systems doing business in Massachusetts. Developers are defined as persons developing or substantially modifying AI systems, while deployers are persons using high-risk AI systems.
7 subdomains (5 Good, 2 Minimal)