Technical mechanisms operating on non-model components of the AI system without modifying model weights. Components include: input/output interfaces, runtime environment, guardrail/monitoring classifiers, tool chain, and hardware.
Also in AI System
AI agent developers are tasked with ensuring the safety, transparency, and reliability of AI agents through specific measures.
Reasoning: the description lacks concrete mitigation measures and is not specific enough to identify a focal activity.
Agent identifier
Explore and experimentally develop an AI agent identifier system, e.g., by assigning a unique ID to each agent. Identity marking enhances monitoring capabilities, ensuring the transparency, traceability, and controllability of agent behaviors, and builds trust among agents, reducing potential conflicts or malfunctions.
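As a rough illustration of such a scheme, the sketch below assigns each agent a UUID at registration and requires every recorded action to carry a known ID, making behavior attributable. The `AgentRegistry` class and its method names are hypothetical, not part of any published standard.

```python
import uuid

class AgentRegistry:
    """Minimal sketch: assign each agent a unique, traceable identifier."""

    def __init__(self):
        self._agents = {}

    def register(self, name):
        """Assign a fresh UUID to a new agent and return it."""
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = {"name": name, "actions": []}
        return agent_id

    def record_action(self, agent_id, action):
        """Attribute an action to a registered agent; unknown IDs are rejected."""
        if agent_id not in self._agents:
            raise KeyError("unknown agent ID: action is not attributable")
        self._agents[agent_id]["actions"].append(action)

    def trace(self, agent_id):
        """Return the audit trail for one agent."""
        return list(self._agents[agent_id]["actions"])
```

A real deployment would persist the registry and sign the IDs; here the point is only that every action is traceable to exactly one identity.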
1.2.3 Monitoring & Detection
Ability to undo agent operations
Establish an “undo” mechanism for agent operations to ensure that agent actions can be interrupted or rolled back in a timely manner when coordination failure, conflict escalation, or anomalous behavior is detected. This capability can be realized through preset security trigger conditions or manual intervention interfaces.
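One common way to realize such a mechanism is the command pattern: each action is paired with its inverse, and completed actions sit on an undo stack that can be rolled back when a trigger fires. This is a minimal sketch under that assumption; `ReversibleAction` and `AgentExecutor` are illustrative names.

```python
class ReversibleAction:
    """Pairs an operation with the inverse that undoes it."""

    def __init__(self, name, do, undo):
        self.name = name
        self.do = do
        self.undo = undo


class AgentExecutor:
    """Keeps an undo stack so completed actions can be rolled back
    when a conflict, escalation, or anomaly is detected."""

    def __init__(self):
        self._done = []

    def execute(self, action):
        action.do()
        self._done.append(action)

    def rollback(self, n=None):
        """Undo the last n actions (all of them by default), newest first."""
        n = len(self._done) if n is None else n
        for _ in range(n):
            self._done.pop().undo()
```

Rolling back newest-first matters: later actions may depend on the effects of earlier ones, so inverses must be applied in reverse order.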
1.2.2 Runtime Environment
Communication protocols between agents
Design and implement standardized communication protocols for agents to enhance the stability and security of multi-agent systems in safety-critical areas such as industrial control, transportation systems, or medical devices. Such protocols optimize the efficiency of data exchange and reduce the risk of system failure caused by miscommunication or delays.
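At its simplest, a standardized protocol fixes a message schema and a version tag, and rejects messages that do not conform. The sketch below assumes a JSON wire format; the field names and the `"1.0"` version string are illustrative, not taken from any published agent protocol.

```python
import json
from dataclasses import dataclass, asdict

PROTOCOL_VERSION = "1.0"  # assumed version tag for this sketch

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    msg_type: str   # e.g. "request", "ack", "error"
    payload: dict
    version: str = PROTOCOL_VERSION

def encode(msg: AgentMessage) -> str:
    """Serialize a message for the wire."""
    return json.dumps(asdict(msg))

def decode(raw: str) -> AgentMessage:
    """Parse and validate an incoming message; reject version mismatches."""
    data = json.loads(raw)
    if data.get("version") != PROTOCOL_VERSION:
        raise ValueError("protocol version mismatch")
    return AgentMessage(**data)
```

Because `decode` reconstructs a typed object and checks the version, malformed or stale messages fail loudly at the boundary instead of propagating into agent logic.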
1.1.4 Model Architecture
Multi-agent collaborative behavior monitoring
Develop a real-time monitoring system to analyze the interaction patterns among multiple agents and identify potential systemic risks (e.g., cascading failures or unexpected amplification effects). Combine simulation testing with dynamic adjustment strategies to ensure that overall system behavior meets safety expectations.
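A crude proxy for one such systemic risk, amplification, is the growth rate of inter-agent traffic: if total messages per round suddenly multiply, agents may be triggering each other in a feedback loop. The function below is a toy detector under that assumption; the `2.0` growth threshold is arbitrary.

```python
def detect_amplification(message_counts_per_round, factor=2.0):
    """Flag round indices where total inter-agent messages grow by more
    than `factor` over the previous round -- a crude proxy for cascading
    amplification in a multi-agent system."""
    alerts = []
    for i in range(1, len(message_counts_per_round)):
        prev = message_counts_per_round[i - 1]
        cur = message_counts_per_round[i]
        if prev > 0 and cur / prev > factor:
            alerts.append(i)
    return alerts
```

A production monitor would track richer signals (per-pair traffic, task outcomes, resource use), but the same pattern applies: compute an interaction statistic per round and alert on anomalous change.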
1.2.3 Monitoring & Detection
Safety Pre-training & Post-training Measures
The safety pre-training and post-training phases form a key line of defense against AI risks. Their core objective is to enhance the model's alignment with human intent and its ability to identify and refuse harmful instructions,56 and to limit the formation and expression of dangerous capabilities from the outset.
1.1.2 Learning Objectives
Safety Pre-training & Post-training Measures > Training data filters & unlearning
Filter out data that could be hazardous, such as bioweapon and gain-of-function-related knowledge. While currently less successful, unlearning techniques could also be applied to make hazardous knowledge more difficult for users to access.
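In its most basic form, such a filter partitions the corpus by matching documents against a blocklist. The sketch below uses keyword matching purely for illustration; real pipelines rely on trained classifiers, and the two terms in `HAZARD_TERMS` are drawn from the examples in the text, not from any actual blocklist.

```python
HAZARD_TERMS = {"gain-of-function", "bioweapon"}  # illustrative blocklist only

def filter_corpus(documents, terms=HAZARD_TERMS):
    """Split a corpus into (kept, dropped) by case-insensitive keyword match.
    Keyword matching is a sketch; production filters use trained classifiers."""
    kept, dropped = [], []
    for doc in documents:
        lowered = doc.lower()
        if any(term in lowered for term in terms):
            dropped.append(doc)
        else:
            kept.append(doc)
    return kept, dropped
```

Returning the dropped set alongside the kept set matters in practice: filtered data should be auditable, both to measure over-removal and to verify that hazardous material was actually excluded.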
1.1 Model
Safety Pre-training & Post-training Measures > Safety alignment training against harmful instructions
Through alignment training (e.g., RLHF/RLAIF) and red-team-driven fine-tuning, enhance the model's ability to recognize and refuse high-risk content related to violence, weapon development, etc.
1.1.2 Learning Objectives
Safety Pre-training & Post-training Measures > Embedding safety values and behavioral constraints
Inject constraints aligned with values such as honesty and controllability during training to ensure the model adheres to human intent in complex scenarios.
1.1.2 Learning Objectives
Safety Pre-training & Post-training Measures > Real-time monitoring of reasoning processes
Introduce automated chain-of-thought monitoring to identify anomalies or potentially malicious behaviors during reasoning, helping to detect deceptive, conspiratorial, or manipulative outputs.
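As a minimal sketch of the idea, a monitor can scan each reasoning step for red-flag phrasing and surface the matches for review. The regex list below is invented for illustration; a deployed monitor would use a trained classifier over reasoning traces, not hand-written patterns.

```python
import re

# Illustrative patterns only; not a vetted deception taxonomy.
SUSPICIOUS_PATTERNS = [
    r"\bhide (this|the plan) from\b",
    r"\bthe user must not (know|see)\b",
    r"\bpretend (to|that)\b",
]

def flag_reasoning(chain_of_thought):
    """Return (step, pattern) pairs for reasoning steps matching any pattern,
    so flagged steps can be escalated for human or classifier review."""
    hits = []
    for step in chain_of_thought:
        for pattern in SUSPICIOUS_PATTERNS:
            if re.search(pattern, step, re.IGNORECASE):
                hits.append((step, pattern))
    return hits
```

Even this toy version shows the key design choice: the monitor reads the intermediate reasoning, not just the final output, so intent that never surfaces in the answer can still be caught.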
1.2.3 Monitoring & Detection
Safety Pre-training & Post-training Measures > Enhancing interpretability and formal verification
Use techniques like neural network reverse engineering to analyze internal mechanisms and identify risks; combine with formal verification methods to mathematically validate critical behaviors, increasing trustworthiness.
2.2 Risk & Assurance
Frontier AI Risk Management Framework (v1.0)
Tse, Brian; Fang, Liang; Xu, Jia; Duan, Yawen; Shao, Jing (2025)
The field of Artificial Intelligence (AI) is rapidly advancing, with systems increasingly performing at or above human levels across various domains. These breakthroughs offer unprecedented opportunities to address humanity's greatest challenges, from scientific discovery and improved healthcare to enhanced economic productivity. However, this rapid progress also introduces unprecedented risks. As advanced AI development and deployment outpace crucial safety measures, the need for robust risk management has never been more critical.

Shanghai Artificial Intelligence Laboratory is an advanced research institute focusing on AI research and application. Working in concert with universities and industry, we explore the future of AI by conducting original and forward-looking scientific research that makes fundamental contributions to basic theory as well as innovations in various technological fields. We strive to become a top-tier global AI laboratory committed to the safe and beneficial development of AI. To proactively navigate these challenges and foster a global “race to the top” in AI safety, we have proposed the AI-45° Law,1 a roadmap to trustworthy AGI.

Introducing our Frontier AI Risk Management Framework

Today, Shanghai AI Laboratory, in collaboration with Concordia AI,2 is proud to introduce the Frontier AI Risk Management Framework v1.0 (the “Framework”). We propose a robust set of protocols designed to empower general-purpose AI developers with comprehensive guidelines for proactively identifying, assessing, mitigating, and governing a set of severe AI risks that pose threats to public safety and national security, thereby safeguarding individuals and society. The Framework serves as a guideline for general-purpose AI model developers to manage the potential severe risks from their models, and it aligns with standards and best practices in risk management from safety-critical industries.
It encompasses six interconnected stages: risk identification, risk thresholds, risk analysis, risk evaluation, risk mitigation, and risk governance.
Deploy: Releasing the AI system into a production environment
Developer: Entity that creates, trains, or modifies the AI system
Manage: Prioritising, responding to, and mitigating AI risks
Primary: 7. AI System Safety, Failures & Limitations