Enacts the Artificial Intelligence Act, requiring developers and deployers of high-risk AI systems to prevent algorithmic discrimination, conduct impact assessments, and disclose risks. Implements risk management and consumer notification protocols. Grants enforcement authority to the State Department of Justice. Effective July 1, 2026.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding state legislative act with mandatory obligations, enforcement mechanisms through the State Department of Justice, civil action rights for consumers, and penalties under the Unfair Practices Act. The document uses mandatory language throughout ('shall') and establishes clear legal requirements for developers and deployers of high-risk AI systems.
The document covers 10 subdomains (4 rated Good, 6 Minimal), with strong focus on unfair discrimination (1.1, 1.3), privacy compromise (2.1), lack of transparency (7.4), and governance failure (6.5). Coverage is concentrated in the discrimination/toxicity, privacy/security, and AI system safety domains, with minimal coverage of malicious actors, misinformation, or socioeconomic risks.
The document governs AI use across multiple sectors through its definition of 'consequential decisions' which explicitly covers education, employment, financial services, healthcare, housing, insurance, and legal services. The regulation applies to any deployer or developer whose AI systems make decisions in these domains.
The document covers multiple lifecycle stages, with primary focus on deployment and operational monitoring. It addresses design through developer documentation requirements, validation through impact assessments, deployment through notification requirements, and ongoing operation through post-deployment oversight. The act does not substantially cover the data collection/processing or model building stages.
The document explicitly defines and covers AI systems and AI models but does not mention frontier AI, general purpose AI, task-specific AI, foundation models, generative AI, predictive AI, open-weight models, or compute thresholds. The focus is on 'high-risk artificial intelligence systems' that make consequential decisions.
New Mexico State Legislature
The document is a legislative bill enacted by the New Mexico State Legislature, as indicated by the opening enactment clause and the legislative bill number (HB 60).
New Mexico State Department of Justice
The State Department of Justice is explicitly designated as the primary enforcement authority with power to enforce the act, promulgate rules, receive disclosures, and evaluate compliance.
New Mexico State Department of Justice; Consumers (New Mexico residents)
The State Department of Justice monitors compliance by receiving and evaluating risk management policies, impact assessments, and records. Consumers also play a monitoring role through civil action rights and receipt of disclosures about AI system use.
The act explicitly targets two categories of entities: developers (persons who develop or intentionally and substantially modify AI systems) and deployers (persons who deploy or use AI systems). These are the primary regulated entities throughout the document.
10 subdomains (4 Good, 6 Minimal)