Outlines six general guiding principles and 18 specific ethical requirements for fair, safe, and accountable AI development in China.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This document relies predominantly on voluntary and aspirational language ('should', 'encourage', 'promote'), with some mandatory prohibitions ('must not', 'forbidden'). Although it includes some enforcement-oriented language, it specifies no penalties, sanctions, or enforcement mechanisms, relying instead on ethical guidance and self-regulation.
The document provides good coverage of approximately 10-12 subdomains, with a strong focus on discrimination and fairness (1.1, 1.3), privacy protection (2.1), misinformation risks (3.1), security vulnerabilities (2.2), malicious-use prevention (4.1, 4.2, 4.3), human agency (5.2), governance structures (6.5), and AI system robustness (7.3, 7.4). Coverage is concentrated in the fairness, privacy, security, and governance domains.
This document applies broadly across all sectors: it governs AI activities by natural persons, legal persons, and other organizations engaged in AI management, research and development, supply, and use, without restricting application to particular industries. The norms are sector-agnostic ethical principles applicable to any entity working with AI technology.
The document explicitly covers all stages of the AI lifecycle, from planning and design through deployment and ongoing monitoring. It structures ethical norms across management, research and development, supply, and use activities, with particular emphasis on development (data quality, algorithm design) and operational monitoring (emergency protection, user feedback).
The document uses the general term 'AI' throughout and refers to 'AI systems,' 'AI products and services,' and 'AI technology' without distinguishing among specific types such as general-purpose AI, task-specific AI, foundation models, or generative AI. There are no mentions of compute thresholds, frontier AI, or open-weight models. The scope appears to cover all AI technologies broadly.
National Governance Committee of New Generation Artificial Intelligence
The document explicitly states in Article 23 that it is issued by the National Governance Committee of New Generation Artificial Intelligence, which is also responsible for explaining and guiding its implementation.
National Governance Committee of New Generation Artificial Intelligence; management departments at all levels; relevant entities
The National Governance Committee is responsible for guiding implementation (Article 23). Management departments are referenced in governance activities (Article 2) and in Article 24. Article 17 calls for 'relevant entities' to intervene in AI systems 'in accordance with laws and regulations.'
management departments; AI developers and suppliers (self-monitoring); National Governance Committee of New Generation Artificial Intelligence
The document emphasizes self-monitoring and self-discipline by developers and suppliers (Articles 10, 15, 17). Management departments are responsible for supervision and inspection (Article 2). The National Governance Committee guides implementation (Article 23). Article 8 calls for systematic risk monitoring and evaluations.
natural persons, legal persons, and other related organizations engaged in management, research and development, supply, and use of AI; management departments at all levels; enterprises; universities; research institutes; associations
Article 2 explicitly defines the scope of application to include natural persons, legal persons, and other organizations engaged in AI management, R&D, supply, and use. Article 24 further names management departments, enterprises, universities, research institutes, and associations as entities that may formulate more specific norms.
17 subdomains (7 Good, 10 Minimal)