Outlines key principles and aims for innovative and safe AI development and use in the public and private sectors. Details compliance and supervision mechanisms, with special attention to critical AI. Encourages international cooperation and imposes penalties for non-compliance.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a draft legislative instrument with binding legal obligations, enforcement mechanisms including administrative penalties and fines, and mandatory compliance requirements indicated by extensive use of 'shall' language throughout.
The document has good coverage of approximately 12-14 subdomains, with strong focus on discrimination and fairness (1.1, 1.3), privacy protection (2.1), AI system security (2.2), misinformation (3.1), malicious use prevention (4.1, 4.2, 4.3), overreliance (5.1), human agency (5.2), governance structures (6.5), and AI safety failures (7.1, 7.3, 7.4). Coverage is concentrated in discrimination/toxicity, privacy/security, malicious actors, human-computer interaction, and system safety domains.
This is a comprehensive national AI law that governs AI use across all economic sectors in China. It applies broadly to AI development, provision, and use activities across the entire economy, with specific provisions for government services, healthcare, finance, education, news/media, transportation (autonomous driving), and judicial applications. The law establishes cross-sectoral governance rather than sector-specific regulation.
The document comprehensively covers all AI lifecycle stages from planning through operational monitoring. It addresses design principles and planning (Articles 3-13), data collection and processing requirements (Articles 20-21, 45), model building and training (Articles 17-18, 51), verification and validation through risk assessments (Articles 42, 54), deployment requirements including registration (Article 53), and extensive operational monitoring obligations (Articles 41, 56, 63).
The document explicitly mentions AI models, AI systems, foundation models, and artificial general intelligence (AGI). It establishes compute thresholds for critical AI classification based on parameters and scale. The document does not explicitly distinguish between generative AI and predictive AI, nor does it specifically address open-weight/open-source models or general purpose AI (GPAI) as distinct categories, though it does address open-source ecosystems generally.
Scholars (as indicated in the title 'Draft for Suggestions from Scholars'); the State Council and relevant departments of the State Council, which are referenced as having authority to formulate implementation rules
The document is explicitly titled as a 'Draft for Suggestions from Scholars,' indicating it was proposed by academic experts. The document references state organs and the State Council as having authority to implement and coordinate AI governance.
Main oversight departments for AI, relevant departments of the State Council, local people's governments at or above the county level, people's courts, people's procuratorates, provincial-level main oversight departments for AI
The document establishes 'main oversight departments for AI' as the primary enforcement bodies with authority to investigate violations, impose penalties, conduct inspections, and revoke licenses. People's courts handle litigation and people's procuratorates can file public interest lawsuits.
Main oversight departments for AI, expert committee on AI, third-party organizations, industry organizations, professional institutions, insurance companies (insurers)
The document establishes monitoring responsibilities for main oversight departments for AI, including conducting risk assessments, random inspections, and security monitoring. Third-party organizations and expert committees provide assessment and monitoring support. Industry organizations conduct self-regulation monitoring.
AI developers, AI providers, AI users, enterprises, small and medium-sized enterprises, individual developers, state organs, employers, educational and scientific research institutions, insurance companies, foreign investors
The document explicitly defines and regulates three categories of actors: AI developers (those who develop AI products and services), AI providers (those who provide AI products and services), and AI users (those who use AI products and services). These definitions are provided in Article 94 and obligations are specified throughout the document for each category.
20 subdomains (12 Good, 8 Minimal)