Promotes a people-centered approach to AI governance, emphasizing stakeholder cooperation to prevent AI safety risks. Outlines safety guidelines for developers, service providers, and users, focusing on ethics, data protection, risk assessment, and mitigation strategies.
Analysis summaries, actor details, and coverage mappings were LLM-classified and may contain errors.
This is a voluntary governance framework: it relies predominantly on recommendatory language ('should') and lacks binding enforcement mechanisms or legal penalties. It is presented as a set of principles and guidelines intended to promote consensus and cooperation among stakeholders rather than as legally enforceable obligations.
The document provides broad coverage, spanning approximately 18-20 subdomains across all risk categories. It demonstrates a particularly strong focus on AI system safety failures (7.1, 7.2, 7.3, 7.4), malicious actors (4.1, 4.2, 4.3), misinformation (3.1, 3.2), privacy and security (2.1, 2.2), discrimination and toxicity (1.1, 1.2, 1.3), and socioeconomic risks (6.1, 6.2, 6.4, 6.5). The framework provides detailed risk classifications and mitigation strategies across the entire AI lifecycle.
The document governs AI use across multiple critical sectors with explicit coverage of government/public administration, healthcare, finance, telecommunications, transportation, and energy/utilities. It provides sector-specific guidance for high-risk applications while establishing general governance principles applicable across all sectors.
The document covers all stages of the AI lifecycle, from planning through operational monitoring. It explicitly addresses design and planning (Section 6.1a), data collection and processing (Sections 3.1.2, 4.1.2), model building and training (Sections 3.1.1, 4.1.1), verification and validation (Sections 6.1g-j), deployment (Section 5.1), and ongoing monitoring and maintenance (Sections 5.7, 6.2f). The framework emphasizes whole-process governance throughout the AI lifecycle.
The document explicitly mentions AI models, AI systems, and AI algorithms throughout. It does not use the specific terms 'frontier AI', 'general purpose AI', 'foundation models', or 'generative AI' as defined categories, though it does reference generative AI capabilities when discussing risks. It mentions compute thresholds for registration requirements and addresses model reuse. The document does not explicitly distinguish between open-weight and closed models.
National Technical Committee 260 on Cybersecurity of SAC (Standardization Administration of China)
The document is authored by the National Technical Committee 260 on Cybersecurity of SAC, which is a technical standards committee under China's Standardization Administration. The framework is presented as implementing the Global AI Governance Initiative.
government authorities; competent authorities; industry associations
The document references government oversight and competent authorities in multiple sections, though compliance relies primarily on voluntary adoption and industry self-regulation rather than formal legal enforcement. Section 5.9 emphasizes industry self-regulation mechanisms.
government agencies; industry associations; social organizations; service providers; users
The document establishes a multi-stakeholder monitoring approach involving government agencies, industry bodies, and the governed entities themselves. Section 5.7 describes information-sharing and emergency response mechanisms, while Section 5.9 establishes social supervision mechanisms.
model and algorithm researchers and developers; service providers; users; technology R&D institutions; government agencies; enterprises; public service units
The document explicitly targets multiple categories of actors across the AI lifecycle. Section 1.3 identifies the target entities, and Sections 6.1-6.4 provide specific safety guidelines for the different target groups, including developers, service providers, users in key areas, and general users.
21 subdomains (14 Good, 7 Minimal)