Technical mechanisms and engineering interventions that directly modify how an AI system processes inputs, generates outputs, or operates, including changes to models, training procedures, runtime behaviors, and supporting hardware.
In response to the above risks, AI developers, service providers, and system users should take technological measures covering training data, computing infrastructure, models and algorithms, products and services, and application scenarios.
Reasoning
The mitigation is too vague: it spans multiple technical measures across the AI system without specifying which mechanisms apply.
Comprehensive governance measures
2.1 Oversight & Accountability
Comprehensive governance measures > Implement tiered and category-based management for AI applications
We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold, or that are applied in specific industries and sectors, and require that such systems possess safety protection capacities throughout the life cycle, including design, R&D, testing, deployment, utilization, and maintenance.
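The framework does not prescribe concrete thresholds, tier names, or regulated sectors; as a minimal sketch of what tiered, category-based management could look like, the following hypothetical classifier assigns a management tier from a capability proxy and the deployment scenario (all names and cutoffs below are illustrative assumptions, not part of the framework):

```python
# Illustrative sketch only: tier names, the compute threshold, and the
# sector list are hypothetical, not taken from the framework.
from dataclasses import dataclass

# Hypothetical set of industries subject to mandatory registration.
REGULATED_SECTORS = {"healthcare", "finance", "critical-infrastructure"}


@dataclass
class AISystem:
    name: str
    compute_flops: float  # training compute in FLOPs, a capability proxy
    sector: str           # application scenario


def risk_tier(system: AISystem) -> str:
    """Assign a management tier from capability and application scenario."""
    if system.compute_flops >= 1e25 or system.sector in REGULATED_SECTORS:
        # High tier: registration plus life-cycle safety assessment.
        return "registration-required"
    if system.compute_flops >= 1e23:
        # Mid tier: periodic testing and assessment.
        return "standard-assessment"
    # Low tier: baseline self-attestation.
    return "baseline"


print(risk_tier(AISystem("clinic-triage", 1e22, "healthcare")))
# "registration-required" because of the regulated sector
```

A real scheme would of course rest on legally defined criteria rather than a single compute number; the point is only that both capability thresholds and specific application scenarios can trigger the registration requirement.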
3.1.4 Compliance Requirements
Comprehensive governance measures > Develop a traceability management system for AI services
We should use digital certificates to label AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels at key stages, including creation sources, transmission paths, and distribution channels, with a view to enabling users to identify and judge information sources and their credibility.
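The distinction between explicit labels (visible to the user) and implicit labels (machine-readable provenance metadata) can be sketched as follows; the schema, field names, and notice wording here are hypothetical assumptions, since the framework mandates labeling but does not specify a format:

```python
# Hypothetical illustration of explicit vs. implicit output labels.
# The field names and notice text are assumptions, not a standard.
import hashlib


def label_output(text: str, provider: str) -> dict:
    """Attach an explicit notice and implicit provenance metadata
    to a piece of AI-generated text."""
    # Explicit label: a human-visible notice prepended to the content.
    explicit = f"[AI-generated content, provider: {provider}]\n{text}"
    # Implicit label: machine-readable metadata recording the creation
    # source and a digest that supports traceability downstream.
    implicit = {
        "source": provider,
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "ai_generated": True,
    }
    return {"display_text": explicit, "metadata": implicit}


labeled = label_output("Sample paragraph.", "ExampleAI")
```

In practice the implicit label would travel with the content through transmission and distribution channels (for example as embedded metadata or a signed manifest) so that platforms can verify the creation source even if the visible notice is stripped.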
3.1.4 Compliance Requirements
Comprehensive governance measures > Improve AI data security and personal information protection regulations
We should specify the requirements for data security and personal information protection at various stages, such as AI training, labeling, utilization, and output, based on the features of AI technologies and applications.
3.1.1 Legislation & Policy
Comprehensive governance measures > Create a responsible AI R&D and application system
We should propose pragmatic instructions and best practices that uphold the people-centered approach and adhere to the principle of developing AI for good in AI R&D and application, and continuously align AI’s design, R&D, and application processes with these values and ethics. We should explore copyright protection, development, and utilization systems adapted to the AI era, and continuously advance the construction of high-quality foundational corpora and datasets to provide premium resources for the safe development of AI. We should establish AI-related ethical review standards, norms, and guidelines to improve the ethical review system.
2.1.3 Policies & Procedures
Comprehensive governance measures > Strengthen AI supply chain security
We should promote knowledge sharing in AI, make AI technologies available to the public under open-source terms, and jointly develop AI chips, frameworks, and software. We should guide the industry to build an open ecosystem, enhance the diversity of supply chain sources, and ensure the security and stability of the AI supply chain.
3.3.1 Industry Coordination
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity of SAC (2024)
Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.
Other (multiple stages)
Applies across multiple lifecycle stages
Other (multiple actors)
Applies across multiple actor types
Manage
Prioritising, responding to, and mitigating AI risks
Primary
7 AI System Safety, Failures & Limitations