Current detection and intervention systems face challenges that will likely intensify as models become more capable. Research efforts are focused on addressing these limitations through several complementary approaches.

● Improving Recall and Precision: Detection systems face an inherent tradeoff between catching concerning behavior (recall) and avoiding false positives that degrade user experience (precision). High-recall systems often over-refuse harmless queries, increase latency, or impose other costs on deployment. Promising research directions include developing more nuanced detection methods that can distinguish edge cases more accurately, creating adaptive systems that adjust sensitivity based on context and user history, and exploring ensemble approaches that combine multiple detection methods to achieve better recall-precision tradeoffs without sacrificing usability.

● Chain-of-Thought Monitoring and Faithfulness: Monitoring a model's externalized reasoning process offers a promising approach for detecting concerning intent before it manifests in outputs. For example, OpenAI's o1 system card describes its chain-of-thought deception monitoring setup, while Meta has openly released 'AlignmentCheck,' a tool designed to detect instances in a model's chain of thought where prompt injection might misalign agent behavior from a user's request. Google has also developed a framework for assessing whether models possess the stealth and situational awareness capabilities necessary for evading monitoring. However, as models become more sophisticated, they may learn to produce reasoning that appears benign while concealing harmful intent, potentially through reward hacking (exploiting flaws in reward signals to achieve high scores without genuinely safe behavior) during training. Establishing chain-of-thought faithfulness and effective monitoring will therefore involve multiple considerations, including keeping reasoning visible and legible to humans for auditing purposes, avoiding training approaches that might incentivize models to conceal their true reasoning, and developing methods to detect when externalized thoughts don't match actual behavior.

● Privacy-Preserving Monitoring and Retrospective Analysis: Maintaining logs for retrospective analysis can be a valuable and cost-effective enabler of other safety interventions, allowing developers to identify patterns that real-time detection systems miss and understand classifier blind spots through historical analysis. This becomes particularly important as developers face unfamiliar risks from sophisticated autonomous behavior in complex environments. However, comprehensive logging raises significant privacy concerns for users who may not want their interactions retained. As models gain more autonomous capabilities and broader access to external systems, finding approaches that provide sufficient visibility for safety while respecting user privacy represents an important area for future research and careful consideration of tradeoffs.

● Interpretability-Based Monitoring: Surface-level monitoring of inputs and outputs may miss concerning behavior that manifests only in a model's internal computations, especially as models develop more complex reasoning capabilities. Advances in mechanistic interpretability and internal activation analysis aim to detect concerning patterns directly from model internals, even when external behavior appears benign. This includes developing methods to identify and monitor safety-relevant circuits within models and creating real-time activation monitoring systems that can flag unusual internal states; a minimal illustrative sketch follows this list.
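To make the interpretability-based monitoring direction above more concrete, the following is a minimal sketch (not any developer's actual system) that attaches a forward hook to one hidden layer of a toy PyTorch model and projects its activations onto a hypothetical pre-fit "concern" probe direction. The model, probe weights, and threshold are all illustrative placeholders.

```python
# Minimal sketch of interpretability-based activation monitoring, assuming a
# pre-fit linear probe ("concern direction") is available for one hidden layer.
# The model, layer choice, probe, and threshold are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden_dim = 64
model = nn.Sequential(
    nn.Linear(32, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, hidden_dim),  # layer whose activations we monitor
    nn.ReLU(),
    nn.Linear(hidden_dim, 8),
)

# Hypothetical probe: a unit vector fit offline (e.g. on activations from
# labeled concerning vs. benign transcripts).
probe_direction = torch.randn(hidden_dim)
probe_direction /= probe_direction.norm()
threshold = 2.5  # illustrative; would be calibrated on held-out data

flags = []

def monitor_hook(module, inputs, output):
    # Project activations onto the probe direction and flag unusual states.
    score = (output @ probe_direction).max().item()
    if score > threshold:
        flags.append(score)

# Attach the monitor to the layer of interest without changing model behavior.
model[2].register_forward_hook(monitor_hook)

x = torch.randn(4, 32)   # stand-in for a batch of model inputs
_ = model(x)             # normal forward pass; the hook runs as a side effect
print(f"flagged activations: {len(flags)}")
```

In practice, such probe directions would be fit on labeled activation data and calibrated against a false-positive budget, which runs into the same recall-precision tradeoff discussed above.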
Capability Limitation Mitigations
Capability limitation mitigations aim to prevent models from possessing knowledge or abilities that could enable harm. These methods alter the model's weights or training process so that it cannot assist with harmful actions when prompted by humans or autonomously pursue harmful objectives. However, the effectiveness of these mitigations is an active area of research, and they can currently be circumvented if dual-use knowledge (knowledge that has both benign and harmful applications) is reintroduced through the context window at inference time or through fine-tuning.
Data Filtering
Data filtering involves removing content from training datasets that could lead to dual-use or potentially harmful capabilities. Developers can use several methods: automated classifiers to identify and remove content related to weapons development, detailed attack methodologies, or other high-risk domains; keyword-based filters to exclude documents containing specific terminology or instructions of concern; and machine learning models trained to recognize subtle patterns in content that might contribute to dangerous capabilities.
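As a rough illustration of how the keyword-based and classifier-based filters described above might be combined, here is a minimal Python sketch. The blocklist, the risk_classifier interface, and the threshold are hypothetical placeholders rather than any developer's production pipeline.

```python
# Illustrative sketch of a training-data filtering pass combining keyword-based
# exclusion with a classifier score. Terms, classifier, and threshold are
# hypothetical placeholders.
from typing import Callable, Iterable, Iterator

# Hypothetical blocklist of high-risk terms; real systems use far richer criteria.
BLOCKED_TERMS = {"example-precursor-term", "example-attack-term"}

def keyword_flagged(doc: str) -> bool:
    text = doc.lower()
    return any(term in text for term in BLOCKED_TERMS)

def filter_corpus(
    docs: Iterable[str],
    risk_classifier: Callable[[str], float],  # returns a risk score in [0, 1]
    threshold: float = 0.8,
) -> Iterator[str]:
    """Yield only documents that pass both the keyword and classifier filters."""
    for doc in docs:
        if keyword_flagged(doc):
            continue  # cheap rule-based exclusion first
        if risk_classifier(doc) >= threshold:
            continue  # drop documents the (hypothetical) classifier deems risky
        yield doc

# Toy usage with a stand-in classifier that keys off a single phrase.
toy_classifier = lambda doc: 0.9 if "synthesis route" in doc.lower() else 0.1
corpus = ["benign cooking recipe", "detailed synthesis route for a toxin"]
print(list(filter_corpus(corpus, toy_classifier)))  # -> ['benign cooking recipe']
```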
Exploratory Methods
Beyond data filtering, researchers are investigating additional capability limitation approaches. However, these methods face technical challenges, and their effectiveness remains uncertain.

● Model distillation could create specialized versions of frontier models with capabilities limited to specific domains. For example, a model could excel at medical diagnosis while lacking knowledge needed for biological weapons development. While the capability limitations may be more fundamental than post-hoc safety training, it remains unclear how effectively this approach prevents harmful capabilities from being reconstructed. Additionally, multiple specialized models would be needed to cover various use cases, increasing development and maintenance costs.

● Targeted unlearning attempts to remove specific dangerous capabilities from models after initial training, offering a more precise alternative to full retraining. Possible approaches include fine-tuning on datasets to overwrite specific knowledge while preserving general capabilities, or modifying how models internally structure and access particular information (a simplified sketch of one such objective follows this list). However, these methods may be reversible with relatively modest effort – restoring "unlearned" capabilities through targeted fine-tuning with small datasets. Models may also regenerate removed knowledge by inferring from adjacent information that remains accessible.

While research continues on these approaches, developers currently rely more heavily on post-deployment mitigations that can be more reliably implemented and assessed.
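To illustrate the targeted unlearning idea mentioned above, here is a minimal sketch of one possible fine-tuning objective: increase the loss on a hypothetical "forget" set while anchoring the model to a "retain" set. The tiny model, random data, and loss weighting are placeholders, not a claim about how any developer actually implements unlearning.

```python
# Minimal sketch of targeted unlearning via fine-tuning: ascend the loss on a
# "forget" set while keeping the loss low on a "retain" set. The model, data,
# and weighting are illustrative only; real unlearning methods are far more
# involved and, as noted above, may be reversible.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 4)                      # stand-in for a pretrained model
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Hypothetical datasets: examples to "unlearn" vs. capabilities to preserve.
forget_x, forget_y = torch.randn(32, 16), torch.randint(0, 4, (32,))
retain_x, retain_y = torch.randn(32, 16), torch.randint(0, 4, (32,))

for step in range(100):
    optimizer.zero_grad()
    # Push the model away from the forget set (negated loss = gradient ascent)
    # while anchoring it to the retain set.
    loss = loss_fn(model(retain_x), retain_y) - 0.5 * loss_fn(model(forget_x), forget_y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    forget_acc = (model(forget_x).argmax(-1) == forget_y).float().mean()
    retain_acc = (model(retain_x).argmax(-1) == retain_y).float().mean()
print(f"forget accuracy: {forget_acc:.2f}, retain accuracy: {retain_acc:.2f}")
```

Even in this toy setup, a short round of fine-tuning on the forget set can quickly restore the suppressed behavior, mirroring the reversibility concern noted above.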
Frontier Mitigations
Frontier Model Forum (2025)
Frontier mitigations are protective measures implemented on frontier models, with the goal of reducing the risk of potential high-severity harms, especially those related to national security and public safety, that could arise from their advanced capabilities. This report discusses emerging industry practices for implementing and assessing frontier mitigations. It focuses on mitigations for managing risks in three primary domains: chemical, biological, radiological and nuclear (CBRN) information threats; advanced cyber threats; and advanced autonomous behavior threats. Given the nascent state of frontier mitigations, this report describes the range of controls and mitigation strategies being employed or researched by Frontier Model Forum members and documents the known limitations of these approaches.
● Other (stage not listed): Applies to a lifecycle stage not captured by the standard categories
● Developer: Entity that creates, trains, or modifies the AI system
● Measure: Quantifying, testing, and monitoring identified AI risks