Governance frameworks, formal policies, and strategic alignment mechanisms.
Safety guidelines establish documented best practices for AI service provider implementation.
Service providers should publicize capabilities, limitations, target users, and use cases of AI products and services.
2.1.3 Policies & Procedures
Assess impact
Service providers should assess the impact of AI products on users, preventing harm to users' mental and physical health, life, and property.
2.2.1 Risk Assessment
Inform users of application scope, precautions, and usage prohibitions
Service providers should inform users of the application scope, precautions, and usage prohibitions of AI products and services in a user-friendly manner within contracts or service agreements, supporting informed choices and cautious use by users.
2.1.3 Policies & Procedures
Service providers should support users to undertake responsibilities of supervision and control within documents such as consent forms and service agreements.
2.1.3 Policies & Procedures
Service providers should ensure that users understand AI products' accuracy, and prepare explanatory plans when AI decisions exert significant impact.
2.1.3 Policies & Procedures
Review responsibility statements
Service providers should review responsibility statements provided by developers to ensure that the chain of responsibility can be traced back to any recursively employed AI models.
2.2.3 Auditing & Compliance
Service providers should increase awareness of AI risk prevention, establish and improve a real-time risk monitoring and management mechanism, and continuously track operational security risks.
2.3.3 Monitoring & Logging
Assess ability of AI products and services to withstand or overcome adverse conditions
Service providers should assess the ability of AI products and services to withstand or overcome adverse conditions under faults, attacks, or other anomalies, and prevent unexpected results and behavioral errors, ensuring that a minimum level of effective functionality is maintained.
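One common engineering pattern for maintaining "a minimum level of effective functionality" under faults is a fallback wrapper around the primary model path. The sketch below is illustrative only, with hypothetical function names; the framework does not prescribe any particular mechanism:

```python
import logging

logger = logging.getLogger("ai_service")

def degraded_answer(query: str) -> str:
    """Hypothetical minimal fallback: a safe canned response that keeps the
    service operating when the primary model path fails."""
    return "The service is temporarily operating in a reduced mode."

def answer(query: str, model_call) -> str:
    """Call the primary model; on any fault, attack-induced error, or other
    anomaly, log the failure and fall back so the service never fails outright."""
    try:
        return model_call(query)
    except Exception as exc:
        logger.warning("primary model failed: %s; serving degraded response", exc)
        return degraded_answer(query)
```

In production, the degraded path would typically be a simpler, well-tested component (a cached or rule-based responder) rather than a fixed string.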
2.2.2 Testing & Evaluation
Report safety and security incidents and vulnerabilities
Service providers should promptly report safety and security incidents and vulnerabilities detected in AI system operations to competent authorities.
3.1.4 Compliance Requirements
Right to take corrective measures or terminate services
Service providers should stipulate in contracts or service agreements that they have the right to take corrective measures or terminate services early upon detecting misuse and abuse not conforming to usage intention and stated limitations.
2.1.3 Policies & Procedures
Technological measures to address risks
In response to the above risks, AI developers, service providers, and system users should take technological risk-prevention measures in the fields of training data, computing infrastructure, models and algorithms, product services, and application scenarios.
1 AI System
Technological measures to address risks > Addressing AI's inherent safety risks
99 Other
Technological measures to address risks > Addressing safety risks in AI applications
99 Other
Comprehensive governance measures
2.1 Oversight & Accountability
Comprehensive governance measures > Implement a tiered and category-based management for AI application
We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold, or those applied in specific industries and sectors, and demand that such systems possess safety protection capacity throughout the life cycle, including design, R&D, testing, deployment, utilization, and maintenance.
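Purely as an illustration of tiered, category-based management, the sketch below assigns a risk tier from capability and deployment context. The tier names, the compute threshold, and all fields are hypothetical; the framework defines none of them:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers; the framework does not name specific grades.
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystemRecord:
    name: str
    application_scenario: str
    compute_flops: float      # training compute, used here as a capacity proxy
    regulated_sector: bool    # deployed in a specific industry or sector

# Hypothetical threshold for illustration; not taken from the framework.
REGISTRATION_COMPUTE_THRESHOLD = 1e25

def risk_tier(system: AISystemRecord) -> RiskTier:
    """Assign a tier from capability and deployment context (illustrative rules)."""
    if system.regulated_sector or system.compute_flops >= REGISTRATION_COMPUTE_THRESHOLD:
        return RiskTier.HIGH
    if system.compute_flops >= REGISTRATION_COMPUTE_THRESHOLD / 100:
        return RiskTier.MEDIUM
    return RiskTier.LOW

def requires_registration(system: AISystemRecord) -> bool:
    """Registration applies above a capacity threshold or in specific sectors;
    both conditions map to the HIGH tier in this sketch."""
    return risk_tier(system) is RiskTier.HIGH
```

A real scheme would of course rest on regulator-defined criteria and assessment results, not a single compute number.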
3.1.4 Compliance Requirements
Comprehensive governance measures > Develop a traceability management system for AI services
We should use digital certificates to label the AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages including creation sources, transmission paths, and distribution channels, with a view to enable users to identify and judge information sources and credibility.
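As a rough illustration of the distinction between explicit and implicit output labels (the framework specifies neither a format nor an API, so everything below is an assumption), a sketch:

```python
def add_explicit_label(text: str, provider: str) -> str:
    """Explicit label: a visible notice attached to AI-generated content."""
    return f"{text}\n[AI-generated content - provider: {provider}]"

def add_implicit_label(text: str, provider: str, model_id: str) -> dict:
    """Implicit label: machine-readable provenance metadata carried alongside
    the content. This toy version uses a metadata sidecar; real schemes embed
    watermarks or signed provenance manifests (e.g. C2PA) in the content itself."""
    return {
        "content": text,
        "metadata": {
            "generator": provider,
            "model": model_id,
            "ai_generated": True,
        },
    }
```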
3.1.4 Compliance Requirements
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity of SAC (2024)
Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.
Other (outside lifecycle): Outside the standard AI system lifecycle
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management