This page is still being polished. If you have thoughts, please share them via the feedback form.
Data on this page is preliminary and may change. Please do not share or cite these figures publicly.
Practices for assessing AI systems, including testing, red teaming, risk assessment, auditing, and compliance verification.
Also in Organisation
Interpretability techniques aim to provide qualitative or quantitative evidence of system trustworthiness based on insights into why the AI system behaves the way it does.
Reasoning
Interpretability techniques modify model design to provide insights into system behavior and trustworthiness.
Mechanistic interpretability
Techniques for understanding how models function and represent concepts internally uniquely allow for assessments of internal model cognition. These techniques could aid in the discovery of system properties or, if thorough enough, aid in constructing safety cases (Clymer, Buhl) for them (Sharkey). For example, mechanistic interpretability techniques might be able to help researchers characterise and intervene on model representations that correspond to harmful concepts such as deception or malice. Current research frontiers in mechanistic interpretability involve developing scalable techniques that beat black-box baselines for identifying and addressing flaws in systems. Mechanistic understanding of models could also help verify the success of other methods which are imperfect, such as unlearning of dangerous capabilities (see above) and analysing written ‘chains-of-thought’ (see below).
2.4.1 Research & FoundationsExplainability
Explainability techniques refer to methods that allow model behaviours to be attributed to specific features in their inputs. They can be useful for both diagnosing system errors and determining accountability for system failures (Gryz, Casper-B). However, current explainability tools are often unreliable (Bordt), highlighting the value of future work to improve on existing tools.
1.2.9 OtherLLM chain-of-thought faithfulness and legibility
Large language model chain-ofthought reasoning does not always faithfully represent how a model arrived at its answer (Turpin). This poses challenges to safety because, without faithful reasoning, models could fool overseers by saying one thing and doing another. For example, language models have stated that they gave their answer based on a logical argument when they actually chose it based on hints that they should not have exploited (Anthropic-D, Turpin), such as seeing that the correct answer is always “B”. One potential challenge with chain-of-thought monitoring stems from how, under optimisation pressure on their reasoning, systems may learn to obfuscate their reasoning in ways that can be actively misleading (OpenAI-E, see also 2.1.1 above for an example).
1.1.2 Learning ObjectivesAttributing model behaviours to training data
Methods for attributing model behaviours to specific examples from training data allow overseers to study how potentially harmful behaviours emerge in systems (Grosse). These tools could also help researchers identify what types of training interventions can mitigate them. For example, attributing control-subverting behaviours to specific examples from training data could help developers curate safer pretraining datasets. Research frontiers include improving the efficiency and scalability of these methods, causally studying how models develop personas and behaviours (Anthropic-F), and predicting what data is needed to learn a particular behaviour (Engstrom, Ilyas).
1.1.1 Training DataStudying goals in systems
Increasingly agentic AI systems are characterised by increasingly goal-oriented behaviour. As a result, studying the emergence and mechanisms behind these behaviours offers a way for researchers to study the system’s alignment with its specification (Ngo). However, it is challenging to study goals in AI systems because they cannot be inspected directly and their behaviour is sometimes but not always consistent with coherent principles (Khan, Mazeika). Directions for future work involve developing concrete definitions and measures of goals in AI systems (e.g. MacDermott) and interpreting how AI systems develop and represent goals internally (e.g. Marks).
2.4.1 Research & FoundationsRisk Assessment
The primary goal of risk assessment is to understand the severity and likelihood of a potential harm. Risk assessments are used to prioritise risks and determine if they cross thresholds that demand specific action. Consequential development and deployment decisions are predicated on these assessments. The research areas in this category involve: A. Developing methods to measure the impact of AI systems for both current and future AI – This includes developing standardised assessments for risky behaviours of AI systems through audit techniques and benchmarks, evaluation and assessment of new capabilities, including potentially dangerous ones; and for real-world societal impact such as labour, misinformation and privacy through field tests and prospective risks analysis. B. Enhancing metrology to ensure that the measurements are precise and repeatable – This includes research in technical methods for quantitative risk assessment tailored to AI systems to reduce uncertainty and the need for large safety margins. This is an important open area of research. C. Building enablers for third-party audits to support independent validation of risk assessments – This includes developing secure infrastructure that enables thorough evaluation while protecting intellectual property, including preventing model theft.
2.2.1 Risk AssessmentRisk Assessment > Audit techniques and benchmarks
Techniques and benchmarks with which AI systems can be effectively and efficiently tested for harmful behaviours are highly varied and central to risk assessments (IAISR, Birhane-A).
3.2.1 Benchmarks & EvaluationRisk Assessment > Downstream impact assessment and forecasting
Assessing and forecasting the many societal impacts of AI systems is one of the most central goals of risk assessments.
2.2.1 Risk AssessmentRisk Assessment > Secure evaluation infrastructure
External auditors and oversight bodies need infrastructure and protocols that enable thorough evaluation while protecting sensitive intellectual property. Ideally, evaluation infrastructure should enable double-blindness: the evaluator’s inability to directly access the system’s parameters and developers’ inability to know what exact evaluations are run (Reuel, Bucknall-A, Casper-B). Meanwhile, the importance of mutual security will continue to grow as system capabilities and risks increase. Methods for developing secure infrastructure for auditing and oversight are known to be possible.
3.2.2 Technical StandardsRisk Assessment > System safety assessment
Safety assessment is not just about individual AI systems, but also their interaction with the rest of the world. For example, when an AI company discovers concerning behaviour from their system, the resulting risks depend, in part, on having internal processes in place to escalate the issue to senior leadership and work to mitigate the risks. System safety considers both AI systems and the broader context that they are deployed in. The study of system safety focuses on the interactions between different technical components as well as processes and incentives in an organisation (IAISR, Hendrycks-B, AISES, Alaga).
2.2.1 Risk AssessmentRisk Assessment > Metrology for AI risk assessment
Metrology, the science of measurement, has only recently been studied in the context of AI risk assessment (IAISR, Hobbhahn). Current approaches generally lack standardisation, repeatability, and precision.
3.2.1 Benchmarks & EvaluationThe Singapore Consensus on Global AI Safety Research Priorities
Bengio, Yoshua; Maharaj, Tegan; Ong, C.-H. Luke; Russell, Stuart D.; Song, Dawn; Tegmark, Max; Lan, Xue; Zhang, Ya-Qin; Casper, Stephen; Lee, Wan Sie; Mindermann, Sören; Wilfred, Vanessa; Balachandran, Vidhisha; Barez, Fazl; Belinsky, Michael; Bello, Imane; Bourgon, Malo; Brakel, Mark; Campos, Siméon; Cass-Beggs, Duncan; Chen, Jiahao; Chowdhury, Rumman; Seah, Kuan Chua; Clune, Jeff; Dai, Jie; Delaborde, Agnes; Dziri, Nouha; Eiras, Francisco; Engels, Joshua; Fan, Jinyu; Gleave, Adam; Goodman, Noah D.; Heide, Fynn; Heidecke, Johannes; Hendrycks, Dan; Hodes, Cyrus; Hsiang, Bryan Low Kian; Huang, Minlie; Jawhar, Sami; Wang, Jingyu; Kalai, Adam Tauman; Kamphuis, Meindert; Kankanhalli, Mohan; Kantamneni, Subhash; Kirk, M.; Kwa, Thomas; Ladish, Jeffrey; Lam, Kwok-Yan; Lee, Wan Sie; Lee, Taewhi; Li, Xiaopeng; Liu, Jiajun; Lu, Ching-Cheng; Mai, Yifan; Mallah, Richard; Michael, Julian; Moës, Nick; Møller, Simon Geir; Nam, K. H.; Ng, TP; Nitzberg, Mark; Nushi, Besmira; Ó hÉigeartaigh, Seán; Ortega, Alejandro; Peigné, Pierre; Petrie, J. Howard; Prud'homme, Benjamin; Rabbany, Reihaneh; Sanchez-Pi, Nayat; Schwettmann, Sarah; Shlegeris, Buck; Siddiqui, Saad; Sinha, Ashish; Soto, Martín; Tan, Cheston; Dong, Ting; Tjhi, William; Trager, Robert; Tse, Brian; Tung, Anthony K. H.; Willes, John; Wong, David; Xu, Wei; Xu, Rong; Zeng, Yi; Zhang, Hao; Žikelić, Djordje (2025)
This is the first International AI Safety Report. Following an interim publication in May 2024, a diverse group of 96 Artificial Intelligence (AI) experts contributed to this first full report, including an international Expert Advisory Panel nominated by 30 countries, the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN). The report aims to provide scientific information that will support informed policymaking. It does not recommend specific policies…. This report summarises the scientific evidence on the safety of general-purpose AI. The purpose of this report is to help create a shared international understanding of risks from advanced AI and how they can be mitigated. To achieve this, this report focuses on general-purpose AI – or AI that can perform a wide variety of tasks – since this type of AI has advanced particularly rapidly in recent years and has been deployed widely by technology companies for a range of consumer and business purposes. The report synthesises the state of scientific understanding of general-purpose AI, with a focus on understanding and managing its risks. Amid rapid advancements, research on general-purpose AI is currently in a time of scientific discovery, and – in many cases – is not yet settled science. The report provides a snapshot of the current scientific understanding of general-purpose AI and its risks. This includes identifying areas of scientific consensus and areas where there are different views or gaps in the current scientific understanding. People around the world will only be able to fully enjoy the potential benefits of general- purpose AI safely if its risks are appropriately managed. This report focuses on identifying those risks and evaluating technical methods for assessing and mitigating them, including ways that general-purpose AI itself can be used to mitigate risks.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks