Assess long-term and potential impacts
Users in key sectors such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health should prudently assess the long-term and potential impacts of applying AI technology in the target application scenarios, and conduct risk assessments and grading to avoid technology abuse.
[2.2.1 Risk Assessment]
System audits
Users should regularly perform system audits on the applicable scenarios, safety, reliability, and controllability of AI systems, while enhancing awareness of risk prevention and response capabilities.
[2.2.3 Auditing & Compliance]
Users should fully understand an AI product's data processing and privacy protection measures before using it.
[2.4.4 Training & Awareness]
Users should use high-security passwords and enable multi-factor authentication mechanisms to enhance account security.
[2.3.2 Access & Security Controls]
Enhance network and supply chain security
Users should enhance their capabilities in areas such as network security and supply chain security to reduce the risk of AI systems being attacked and important data being stolen or leaked, as well as ensure uninterrupted business.
[2.3.2 Access & Security Controls]
Users should properly limit data access, develop data backup and recovery plans, and regularly check data processing flows.
[2.3.2 Access & Security Controls]
Users should ensure that operations comply with confidentiality provisions and use encryption technology and other protective measures when processing sensitive data.
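One of the "other protective measures" mentioned above can be sketched with the standard library alone (which ships no encryption primitive): keyed pseudonymization, where direct identifiers are replaced by HMAC-derived tokens before processing. The function and field names here are hypothetical, chosen only for illustration.

```python
import hashlib
import hmac


def pseudonymize(record: dict, sensitive_fields: set[str], key: bytes) -> dict:
    """Replace direct identifiers with keyed, irreversible tokens.

    The same input and key always yield the same token, so records stay
    joinable for analysis without exposing the raw sensitive values.
    """
    out = {}
    for name, value in record.items():
        if name in sensitive_fields:
            token = hmac.new(key, str(value).encode(), hashlib.sha256).hexdigest()
            out[name] = token[:16]  # truncated for readability in this sketch
        else:
            out[name] = value
    return out
```

Because the key never leaves the data controller, third parties holding pseudonymized records cannot reverse the tokens by brute-forcing common values.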
[2.3.2 Access & Security Controls]
Supervise AI
Users should effectively supervise the behavior and impact of AI, and ensure that AI products and services operate under human authorization and remain subject to human control.
[2.3.3 Monitoring & Logging]
Avoid complete reliance on AI
Users should avoid complete reliance on AI for decision making, monitor and record instances where users turn down AI decisions, and analyze inconsistencies in decision-making. They should have the capability to swiftly shift to human-based or traditional methods in the event of an accident.
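The monitoring obligation above — record instances where users turn down AI decisions and analyze the inconsistencies — can be sketched as a small decision log. This is a minimal illustration under assumed names (`DecisionLog`, `override_rate`); the framework prescribes the practice, not any particular data structure.

```python
from dataclasses import dataclass, field


@dataclass
class DecisionLog:
    """Records AI recommendations alongside the final human decisions."""
    entries: list = field(default_factory=list)

    def record(self, case_id: str, ai_decision: str, human_decision: str) -> None:
        self.entries.append({
            "case_id": case_id,
            "ai": ai_decision,
            "human": human_decision,
            "overridden": ai_decision != human_decision,
        })

    def override_rate(self) -> float:
        """Share of cases where the human turned down the AI decision."""
        if not self.entries:
            return 0.0
        return sum(e["overridden"] for e in self.entries) / len(self.entries)

    def overrides(self) -> list:
        """Cases to review when analyzing inconsistencies in decision-making."""
        return [e for e in self.entries if e["overridden"]]
```

A rising override rate is a natural trigger for the fallback the text calls for: suspending the AI path and shifting to human-based or traditional methods.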
[2.3.4 Incident Response]
Technological measures to address risks
In response to the above risks, AI developers, service providers, and system users should prevent them by taking technological measures in the fields of training data, computing infrastructure, models and algorithms, products and services, and application scenarios.
[1 AI System]
Technological measures to address risks > Addressing AI’s inherent safety risks
[99 Other]
Technological measures to address risks > Addressing safety risks in AI applications
[99 Other]
Comprehensive governance measures
[2.1 Oversight & Accountability]
Comprehensive governance measures > Implement a tiered and category-based management for AI application
We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold, or those applied in specific industries and sectors, and demand that such systems possess safety protection capabilities throughout the life cycle, including design, R&D, testing, deployment, utilization, and maintenance.
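The tiered, category-based scheme above can be sketched as a grading function. Note the framework mandates classification and registration but specifies no numbers: the sector list, the compute threshold, and the tier names below are invented placeholders, not values from the document.

```python
# Illustrative placeholders only -- the framework does not define these values.
HIGH_RISK_SECTORS = {"government", "critical infrastructure", "public health"}
COMPUTE_REGISTRATION_THRESHOLD = 1e25  # hypothetical training-compute cutoff (FLOPs)


def risk_tier(sector: str, training_flops: float, public_facing: bool) -> str:
    """Assign an illustrative risk tier that drives testing and registration duties."""
    if training_flops >= COMPUTE_REGISTRATION_THRESHOLD or sector in HIGH_RISK_SECTORS:
        return "high"    # register the system; full life-cycle safety requirements
    if public_facing:
        return "medium"  # testing and assessment before release
    return "low"         # baseline controls
```

A real regime would attach concrete obligations to each tier (registration, audits, end-use restrictions); the point here is only the shape of the grading logic.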
[3.1.4 Compliance Requirements]
Comprehensive governance measures > Develop a traceability management system for AI services
We should use digital certificates to label the AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages including creation sources, transmission paths, and distribution channels, with a view to enabling users to identify and judge information sources and credibility.
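The explicit/implicit label distinction above can be sketched as follows: a visible notice for human readers, plus machine-readable metadata carrying a keyed tag that a verifier can recompute to check the claimed creation source. The HMAC tag is an illustrative stand-in for the certificate-based labeling the text envisions, and all names (`label_output`, `provider_id`) are hypothetical.

```python
import hashlib
import hmac
import json


def label_output(text: str, provider_id: str, key: bytes) -> dict:
    """Attach an explicit (visible) and implicit (machine-readable) AI label."""
    explicit = f"[AI-generated content | source: {provider_id}]\n{text}"
    # Implicit label: metadata plus a keyed tag tied to the content,
    # so downstream parties can check the claimed creation source.
    tag = hmac.new(key, f"{provider_id}:{text}".encode(), hashlib.sha256).hexdigest()
    metadata = {"generator": provider_id, "ai_generated": True, "tag": tag}
    return {"display": explicit, "metadata": json.dumps(metadata)}


def verify_label(text: str, metadata_json: str, key: bytes) -> bool:
    """Recompute the tag to judge the information source and its credibility."""
    meta = json.loads(metadata_json)
    expected = hmac.new(key, f"{meta['generator']}:{text}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta["tag"])
```

Verification fails if either the content or the claimed generator is altered, which is what lets the label travel through transmission and distribution channels while staying checkable.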
[3.1.4 Compliance Requirements]
AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity of SAC (2024)
Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.
Deploy: Releasing the AI system into a production environment.
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy.
Govern: Policies, processes, and accountability structures for AI risk management.