Cannot be confidently classified due to insufficient information, excessive vagueness, or ambiguity.
Reasoning: The mitigation name is a purely generic problem statement; no description or evidence is provided to identify a specific mechanism.
Security protection mechanism
A security protection mechanism should be established to prevent the model from being interfered with or tampered with during operation, so as to ensure reliable outputs.
Category: 1.2.4 Security Infrastructure

Data safeguard
A data safeguard should be set up to make sure that AI systems comply with applicable laws and regulations when outputting sensitive personal information and important data.
Category: 1.2.1 Guardrails & Filtering

Service limitations
Service limitations should be established according to users’ actual application scenarios, and AI system features that might be abused should be removed. AI systems should not provide services beyond the preset scope.
Category: 1.1.3 Capability Modification

Trace end use of AI systems
The ability to trace the end use of AI systems should be improved to prevent high-risk application scenarios, such as the manufacture of weapons of mass destruction (nuclear, biological, and chemical weapons and missiles).
Category: 2.3.3 Monitoring & Logging

Identify and regulate outputs
Unexpected, untruthful, and inaccurate outputs should be identified by technological means and regulated in accordance with laws and regulations.
Category: 1.2.1 Guardrails & Filtering

Prevent abuse of AI systems
Strict measures should be taken to prevent abuse of AI systems that collect, connect, gather, analyze, and mine users’ inquiries to profile their identities, preferences, and personal mindsets.
Category: 2.3.2 Access & Security Controls

Prevent, detect, and respond to cognitive warfare
R&D of AI-generated content (AIGC) detection technologies should be intensified to better prevent, detect, and respond to cognitive warfare.
Category: 3.2.1 Benchmarks & Evaluation

Filter training data and verify outputs
Training data should be filtered and outputs verified during algorithm design, model training and optimization, service provision, and other processes, to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, occupation, health, and other factors.
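As a minimal illustration of this measure (not part of the framework itself), a keyword-based filter over training records and a matching post-hoc output check might look like the sketch below; the blocklist contents and function names are hypothetical placeholders.

```python
# Sketch of training-data filtering and output verification.
# The blocklist and helper names are illustrative assumptions only.

BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholder discriminatory terms


def filter_training_data(records):
    """Drop training records that contain any blocked term."""
    return [
        r for r in records
        if not any(term in r.lower() for term in BLOCKED_TERMS)
    ]


def verify_output(text):
    """Return True if a model output passes the same check."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)


clean = filter_training_data(["benign sentence", "contains slur_a here"])
print(clean)  # only the benign record survives
print(verify_output("a harmless reply"))
```

In practice such checks would use trained classifiers rather than keyword lists, but the pipeline shape (filter at training time, verify at serving time) is the same.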
Category: 1.1 Model

Emergency management and control measures
AI systems applied in key sectors, such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health and safety, should be equipped with highly efficient emergency management and control measures.
Category: 2.3.4 Incident Response

Technological measures to address risks
In response to the above risks, AI developers, service providers, and system users should take technological measures covering training data, computing infrastructure, models and algorithms, product services, and application scenarios.
Category: 1 AI System

Technological measures to address risks > Addressing AI’s inherent safety risks
Category: 99 Other

Comprehensive governance measures
Category: 2.1 Oversight & Accountability

Comprehensive governance measures > Implement a tiered and category-based management for AI application
We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, impose requirements on the adoption of AI technologies by specific users and in specific scenarios, and thereby prevent AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold, or that are applied in specific industries and sectors, and require that such systems possess safety protection capacity throughout the life cycle, including design, R&D, testing, deployment, utilization, and maintenance.
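The tiered, category-based scheme above can be sketched as a simple classification rule. The tier names, compute threshold, and sector list below are assumptions for demonstration; the framework does not specify concrete values.

```python
# Illustrative sketch of tiered, category-based AI system classification.
# Threshold and sector list are hypothetical, not from the framework.

REGISTRATION_FLOP_THRESHOLD = 1e25  # assumed compute threshold
REGULATED_SECTORS = {"government", "critical_infrastructure", "healthcare"}


def classify(system):
    """Return (risk_tier, must_register) for a system record (a dict)."""
    must_register = (
        system.get("training_flops", 0) >= REGISTRATION_FLOP_THRESHOLD
        or system.get("sector") in REGULATED_SECTORS
    )
    tier = "high" if must_register else "general"
    return tier, must_register


print(classify({"training_flops": 5e25, "sector": "retail"}))
```

A real scheme would weigh features and functions as well, but the structure — grade by risk factors, then attach registration and life-cycle obligations to the higher tier — is the one the measure describes.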
Category: 3.1.4 Compliance Requirements

Comprehensive governance measures > Develop a traceability management system for AI services
We should use digital certificates to label AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages, including creation sources, transmission paths, and distribution channels, so that users can identify and judge information sources and their credibility.
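To make the explicit/implicit distinction concrete, here is a minimal sketch: an explicit label is visible text shown to users, while an implicit label is machine-readable provenance metadata carried alongside the content. The label text and field names are illustrative assumptions only.

```python
# Sketch of explicit (visible) and implicit (metadata) AIGC labeling.
# Label wording and metadata schema are hypothetical.
import json


def label_output(text, model_id):
    """Attach an explicit label and implicit provenance metadata to output."""
    explicit = f"[AI-generated] {text}"  # shown directly to end users
    implicit = {                         # machine-readable, for tracing
        "aigc": True,
        "source_model": model_id,
        "content": text,
    }
    return explicit, json.dumps(implicit)


visible, metadata = label_output("Example reply.", "model-x")
print(visible)
```

Production systems typically embed the implicit label in file metadata or watermarks rather than a JSON sidecar, but the two-channel design is the same.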
Category: 3.1.4 Compliance Requirements

Comprehensive governance measures > Improve AI data security and personal information protection regulations
We should specify the requirements for data security and personal information protection at various stages, such as AI training, labeling, utilization, and output, based on the features of AI technologies and applications.
Category: 3.1.1 Legislation & Policy

AI Safety Governance Framework
National Technical Committee 260 on Cybersecurity of SAC (2024)
Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.
Other (outside lifecycle): Outside the standard AI system lifecycle.
Governance Actor: A regulator, standards body, or oversight entity shaping AI policy.
Other: A risk management function not captured by the standard AIRM categories.