This page is still being polished. If you have thoughts, please share them via the feedback form.
Data on this page is preliminary and may change. Please do not share or cite these figures publicly.
Foundational safety research, theoretical understanding, and scientific inquiry informing AI development.
Also in Engineering & Development
Ensuring that AI systems remain reliable and secure in the face of adversarial manipulation, misaligned inputs, and uncertain conditions, such as by protecting against prompt-based exploits, poisoning attacks, and adversarial perturbations, and introducing control mechanisms and uncertainty quantification methods to maintain resilient system behavior at scale.
Reasoning
Adversarial training and uncertainty quantification shape model robustness through learning objectives and optimization targets.
Defending against jailbreaks and prompt injections
Improving state-of-the-art methods for discovering, evaluating, and defending against prompt injection and "jailbreaking" attacks. Research also focuses on structural defenses, such as detection, filtering, and paraphrasing of prompts, as well as addressing vulnerabilities stemming from a lack of robust privilege levels (e.g., system prompt vs. user instruction) in LLM inputs.
1.2 Non-ModelDefending against poisoning and backdoors
Understanding how LLMs can be compromised through data poisoning at various training stages, examining the effect of model scale on vulnerability, testing out-of-context reasoning under poisoning, and exploring attacks via additional modalities and encodings. This area also includes detecting and removing backdoors (i.e., Trojan detection) to ensure that covertly embedded harmful behaviors are mitigated.
1.1 ModelAdversarial robustness to perturbations
This area investigates how models can be made more resilient to carefully crafted adversarial perturbations designed to degrade performance or reveal vulnerabilities. Research involves identifying methods for bolstering model robustness under challenging conditions, including adversarial training and certified defenses.
1.1.2 Learning ObjectivesUncertainty quantification
Quantifying uncertainty in model predictions. Techniques include ensemble methods, conformal predictions, and Bayesian approaches to estimate and calibrate model confidence.
2.4.1 Research & FoundationsControl mechanisms for untrusted models
Designing and evaluating protocols to control outputs from untrusted models. This includes methods for monitoring backdoored outputs, integrating control measures with traditional insider risk management strategies, building safety cases for control tools, and employing white-box techniques (e.g., linear probes) for continuous oversight.
2.4.1 Research & FoundationsTheoretical foundations and provable safety in AI systems
Advancing the theoretical foundations of AI safety by building models and frameworks that ensure provably correct and robust behavior. These efforts span from verifiable architectures and formal verification methods to embedded agency, decision theory, incentive structures aligned with causal reasoning, and control theory.
2.4.1 Research & FoundationsTheoretical foundations and provable safety in AI systems > Building verifiable and robust AI architectures
Constructing AI systems with architectures that support formal verification and robustness guarantees, such as world models that enable safe and reliable planning, or guaranteed safe AI with Bayesian oracles. This area emphasizes simplicity and transparency to aid in provability.
1.1.4 Model ArchitectureTheoretical foundations and provable safety in AI systems > Formal verification of AI systems
Applying formal methods to verify that AI models and algorithms meet stringent safety, robustness, and performance criteria. This includes proving resilience against adversarial inputs and perturbations, and certifying conformance to specified safety properties under varying conditions.
2.2.2 Testing & EvaluationTheoretical foundations and provable safety in AI systems > Decision theory and rational agency
Establishing formal decision-making frameworks that ensure rational and safe choices by AI agents, potentially drawing on concepts like causal and evidential decision theory.
2.4.1 Research & FoundationsTheoretical foundations and provable safety in AI systems > Embedded agency
Explores how agents can model and reason about themselves and their environment as interconnected parts of a single system, addressing challenges like self-reference, resource constraints, and the stability of reasoning processes. This includes tackling problems arising from the lack of a clear boundary between the agent and its environment.
2.4.1 Research & FoundationsTheoretical foundations and provable safety in AI systems > Causal incentives
Developing frameworks that formalize how to align agent incentives with safe and desired outcomes by ensuring their causal understanding matches intended objectives. This research provides a formal language for guaranteeing safety, addressing challenges like goal misspecification, and complementing broader efforts in agent foundations and robust system design.
2.4.1 Research & FoundationsExpert Survey: AI Reliability & Security Research Priorities
O'Brien, Joe; Dolan, Jeremy; Kim, Jay; Dykhuizen, Jonah; Sania, Jeba; Becker, Sebastian; Kraprayoon, Jam; Labrador, Cara (2025)
Our survey of 53 specialists across 105 AI reliability and security research areas identifies the most promising research prospects to guide strategic AI R&D investment. As companies are seeking to develop AI systems with broadly human-level capabilities, research on reliability and security is urgently needed to ensure AI's benefits can be safely and broadly realized and prevent severe harms. This study is the first to quantify expert priorities across a comprehensive taxonomy of AI safety and security research directions and to produce a data-driven ranking of their potential impact. These rankings may support evidence-based decisions about how to effectively deploy resources toward AI reliability and security research.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks