Classification: Unable to classify (cannot be confidently classified due to insufficient information, excessive vagueness, or ambiguity).
Reasoning: the description is too vague to identify the specific mechanism or implementation approach needed for classification.
Explainability and predictability
Explainability and predictability of AI should be constantly improved to provide clear explanations of the internal structure, reasoning logic, technical interfaces, and output results of AI systems, accurately reflecting the process by which AI systems produce outcomes.
Classification: 1 AI System
Secure development standards
Secure development standards should be established and implemented in the design, R&D, deployment, and maintenance processes to eliminate as many security flaws and discrimination tendencies in models and algorithms as possible and enhance robustness.
Classification: 2.4.2 Design Standards
Security rules
Security rules on data collection, usage, and the processing of personal information should be observed in all procedures involving training data and user interaction data, including collection, storage, usage, processing, transmission, provision, publication, and deletion. This aims to fully ensure users' legitimate rights stipulated by laws and regulations, such as their rights to control, to be informed, and to choose.
Classification: 2.1.3 Policies & Procedures
Protection of IPR
Protection of IPR should be strengthened to prevent infringement on IPR at stages such as training data selection and result output.
Classification: 2.1.3 Policies & Procedures
Training data selection
Training data should be strictly selected to ensure exclusion of sensitive data in high-risk fields such as nuclear, biological, and chemical weapons and missiles.
Classification: 1.1.1 Training Data
Data security management
Data security management should be strengthened to ensure compliance with data security and personal information protection standards and regulations when training data contains sensitive personal information or important data.
Classification: 2.3.2 Access & Security Controls
Use and filter data
To use truthful, precise, objective, and diverse training data from legitimate sources, and to filter out invalid, erroneous, and biased data in a timely manner.
Classification: 1.1.1 Training Data
Comply with regulations
The cross-border provision of AI services should comply with the regulations on cross-border data flow. The external provision of AI models and algorithms should comply with export control requirements.
Classification: 3.1.4 Compliance Requirements
Disclosure and transparency
To properly disclose the principles, capacities, application scenarios, and safety risks of AI technologies and products, to clearly label outputs, and to constantly make AI systems more transparent.
Classification: 3.1.4 Compliance Requirements
Enhance systems of multiple AI models
To enhance the risk identification, detection, and mitigation of platforms where multiple AI models or systems congregate, so as to prevent malicious acts or attacks and invasions that target the platforms from impacting the AI models or systems they support.
Classification: 2.3 Operations & Security
Strengthen capacity of constructing, managing, and operating AI
To strengthen the capacity of constructing, managing, and operating AI computing platforms and AI system services safely, with an aim to ensure uninterrupted infrastructure operation and service provision.
Classification: 2.3 Operations & Security
Supply chain security
To fully consider the supply chain security of the chips, software, tools, computing infrastructure, and data sources adopted for AI systems. To track the vulnerabilities and flaws of both software and hardware products and make timely repairs and reinforcement to ensure system security.
Classification: 2.3.2 Access & Security Controls
Technological measures to address risks
In response to the above risks, AI developers, service providers, and system users should take technological measures to prevent them in the fields of training data, computing infrastructure, models and algorithms, product services, and application scenarios.
Classification: 1 AI System
Technological measures to address risks > Addressing safety risks in AI applications
Classification: 99 Other
Comprehensive governance measures
Classification: 2.1 Oversight & Accountability
Comprehensive governance measures > Implement tiered and category-based management for AI applications
We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold or that are applied in specific industries and sectors, and demand that such systems possess safety protection capacity throughout the life cycle, including design, R&D, testing, deployment, utilization, and maintenance.
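The tiered, category-based management described above can be sketched in code. This is a hypothetical illustration only: the tier names, compute threshold, regulated sectors, and escalation rule are all assumptions for the sake of example, not values defined by the framework.

```python
from dataclasses import dataclass

TIERS = ["low", "medium", "high"]  # assumed risk grades

# Assumed registration criteria: training compute above a threshold,
# or deployment in a specifically regulated sector (hypothetical values).
REGISTRATION_COMPUTE_THRESHOLD = 1e25  # FLOPs, illustrative
REGULATED_SECTORS = {"finance", "healthcare", "critical-infrastructure"}

@dataclass
class AISystem:
    name: str
    compute_flops: float   # training compute, FLOPs
    sector: str            # deployment sector
    scenario_risk: str     # "low" | "medium" | "high"

def assign_tier(system: AISystem) -> str:
    """Grade a system by scenario risk, escalating one tier in regulated sectors."""
    tier = system.scenario_risk
    if system.sector in REGULATED_SECTORS and tier != "high":
        tier = TIERS[TIERS.index(tier) + 1]
    return tier

def requires_registration(system: AISystem) -> bool:
    """Registration applies above the compute threshold or in regulated sectors."""
    return (system.compute_flops >= REGISTRATION_COMPUTE_THRESHOLD
            or system.sector in REGULATED_SECTORS)
```

For example, a low-risk system deployed in finance would be escalated to the "medium" tier and flagged for registration under these assumed rules; the real criteria would come from the testing and assessment system the framework calls for.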
Classification: 3.1.4 Compliance Requirements
Comprehensive governance measures > Develop a traceability management system for AI services
We should use digital certificates to label the AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages including creation sources, transmission paths, and distribution channels, with a view to enabling users to identify and judge information sources and credibility.
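The explicit/implicit labeling distinction above can be made concrete with a minimal sketch: an explicit label is a user-visible notice attached to the content, while an implicit label is machine-readable provenance metadata. The notice wording, metadata fields, and hashing scheme here are illustrative assumptions, not requirements defined by the framework.

```python
import hashlib
from datetime import datetime, timezone

def label_output(text: str, provider: str, model: str) -> dict:
    """Attach an explicit notice and implicit provenance metadata
    to a piece of AI-generated content (hypothetical scheme)."""
    # Explicit label: visible to the end user alongside the content.
    labeled_text = f"[AI-generated content] {text}"
    # Implicit label: machine-readable record of the creation source,
    # with a content hash so the record can be tied to the output.
    implicit_metadata = {
        "provider": provider,
        "model": model,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }
    return {"text": labeled_text, "metadata": implicit_metadata}

result = label_output("Example answer.", "ExampleProvider", "example-model-1")
```

In a real traceability system the metadata would be signed (e.g., with the digital certificates the framework mentions) so that transmission and distribution platforms could verify it; this sketch omits signing.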
Classification: 3.1.4 Compliance Requirements
Comprehensive governance measures > Improve AI data security and personal information protection regulations
We should explicate the requirements for data security and personal information protection in various stages such as AI training, labeling, utilization, and output based on the features of AI technologies and applications.
Classification: 3.1.1 Legislation & Policy
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity of SAC (2024)
Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.
Other (outside lifecycle): outside the standard AI system lifecycle.
Governance Actor: regulator, standards body, or oversight entity shaping AI policy.
Unable to classify: could not be classified to a specific AIRM function.