Regulates AI-related activities, requiring ethics reviews for research involving human-computer fusion systems and automated decision-making systems impacting safety or health. Directs entities to establish ethics review committees. Governs algorithmic data handling, mandating transparency and ethical risk assessments.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a binding regulatory instrument with mandatory language throughout, explicit enforcement mechanisms including penalties and sanctions, and formal administrative oversight structures. The document uses mandatory terms like 'shall' extensively and establishes legal consequences for non-compliance.
The document has good coverage of 12 subdomains, with strong focus on privacy compromise (2.1), AI system security (2.2), overreliance and unsafe use (5.1), loss of agency and autonomy (5.2), governance failure (6.5), goal misalignment (7.1), dangerous capabilities (7.2), lack of robustness (7.3), and lack of transparency (7.4). Coverage is concentrated in the privacy/security, human-computer interaction, governance, and AI safety domains.
The document primarily governs the Scientific Research and Development Services sector, with significant coverage of Health Care and Social Assistance, Educational Services, and Information sectors. It applies to research institutions, universities, medical facilities, and enterprises conducting S&T activities in areas such as AI, life sciences, and medicine.
The document comprehensively covers all stages of the AI lifecycle, with particularly strong emphasis on the Plan and Design, Verify and Validate, Deploy, and Operate and Monitor stages. It establishes ethics review requirements that span from initial planning through ongoing operational monitoring.
The document explicitly mentions AI systems, algorithmic models, and automated decision-making systems. It does not use terms like 'frontier AI', 'general purpose AI', 'foundation models', or 'generative AI' but focuses on functional descriptions of AI capabilities and risks. There is no mention of compute thresholds or open-weight models.
Ministry of Science and Technology (MOST); Chinese central government
The document was formulated by the Ministry of Science and Technology, as indicated by Article 55, which states that 'MOST shall be responsible for the interpretation of these Measures', and by the document header indicating Chinese central government authority.
Ministry of Science and Technology (MOST); local and relevant main industrial oversight departments; provincial-level S&T administrative departments; project management departments
The document establishes MOST as the primary enforcement authority responsible for overall guidance, while local and relevant main industrial oversight departments handle supervision and management in their respective jurisdictions. Multiple articles detail enforcement powers and procedures.
S&T ethics (review) committees; Ministry of Science and Technology (MOST); local and relevant main industrial oversight departments; National Science and Technology Ethics Commission
The document establishes S&T ethics review committees as the primary monitoring bodies for ongoing S&T activities, with MOST providing oversight through a national information registration platform. The National Science and Technology Ethics Commission provides professional advice on important oversight matters.
Institutes of higher education, scientific research institutions, medical and health institutions, and enterprises engaged in S&T activities in areas such as life sciences, medicine, and artificial intelligence
The document explicitly targets work units conducting S&T activities, particularly those in AI, life sciences, and medicine. Article 4 specifically identifies institutes of higher education, scientific research institutions, medical and health institutions, and enterprises as the entities responsible for establishing ethics review committees.
12 subdomains (9 Good, 3 Minimal)