Structured analysis to identify, characterize, and prioritize potential harms and risks from AI systems.
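The prioritisation step is often made concrete by scoring each identified risk on likelihood and severity and ranking by the combined score. A minimal sketch of that idea follows; the 1–5 scales, risk names, and scores are illustrative assumptions, not figures from this page:

```python
# Minimal risk-prioritisation sketch: score each risk on assumed
# 1-5 likelihood and severity scales, then rank by their product.
# The example risks and their scores are made up for illustration.

def risk_score(likelihood: int, severity: int) -> int:
    """Combine 1-5 likelihood and severity ratings into one score."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

def prioritise(risks: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Return (risk, score) pairs sorted from highest to lowest score."""
    scored = {name: risk_score(l, s) for name, (l, s) in risks.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

ranked = prioritise({
    "data leakage": (4, 3),   # (likelihood, severity)
    "harmful advice": (2, 5),
    "service outage": (3, 2),
})
```

In practice the scales, weighting, and thresholds are chosen per organisation; the product of two ordinal scores is only one common convention.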
Risk Taxonomy
A way to categorise and organise risks across multiple dimensions
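One way to make a multi-dimensional taxonomy concrete is to tag each risk along several independent dimensions so it can be grouped or filtered on any axis. The dimensions and example values below are illustrative assumptions, not a taxonomy defined on this page:

```python
from dataclasses import dataclass

# Illustrative multi-dimensional risk taxonomy: each risk entry is
# tagged along independent dimensions (source, timing, affected party),
# so the same set of risks can be organised along any axis.

@dataclass(frozen=True)
class RiskEntry:
    name: str
    source: str    # e.g. "misuse", "malfunction", "systemic"
    timing: str    # e.g. "pre-deployment", "post-deployment"
    affected: str  # e.g. "users", "third parties", "society"

def by_dimension(entries: list[RiskEntry], dimension: str) -> dict:
    """Group risk entries by one taxonomy dimension."""
    groups: dict[str, list[RiskEntry]] = {}
    for entry in entries:
        groups.setdefault(getattr(entry, dimension), []).append(entry)
    return groups
```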
2.2.1 Risk Assessment
Engagement with Relevant Experts and Communities
Domain experts, users, and impacted communities have unique insights into likely risks
Delphi Method
A group decision-making technique that uses iterative rounds of questionnaires, with feedback between rounds, to build consensus among a panel of experts
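The questionnaire rounds can be summarised numerically, with the panel's central estimate and spread fed back between rounds. A sketch under assumed conditions (numeric ratings, and a narrow interquartile range treated as a heuristic consensus signal):

```python
from statistics import median, quantiles

# Delphi-style round summary: report the panel median and
# interquartile range (IQR) for one question. Treating a small IQR
# as "consensus" is an assumed heuristic, not a fixed rule.

def summarise_round(ratings: list[float]) -> dict[str, float]:
    """Median and IQR of one round of panel ratings."""
    q1, _, q3 = quantiles(ratings, n=4)
    return {"median": median(ratings), "iqr": q3 - q1}

def has_consensus(ratings: list[float], iqr_threshold: float = 1.0) -> bool:
    """Heuristic: the panel has converged once the IQR is small."""
    return summarise_round(ratings)["iqr"] <= iqr_threshold
```

Between rounds, panellists would see the group median (and often anonymised rationales) before revising their own estimates.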
Threat Modelling
A process for identifying threats to a system and the vulnerabilities they could exploit
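One common way to structure threat modelling is to cross a standard threat checklist (here the STRIDE categories) with each system component, producing candidate threats to triage. The components below are made-up examples, not components named on this page:

```python
# Illustrative threat-modelling sketch: enumerate a threat checklist
# (the STRIDE categories) against each system component. Every
# (component, category) pair is a prompt for review:
# "could this kind of threat apply to this component?"

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

def enumerate_threats(components: list[str]) -> list[tuple[str, str]]:
    """Return every (component, threat-category) pair to triage."""
    return [(c, t) for c in components for t in STRIDE]

threats = enumerate_threats(["model API", "training pipeline", "log store"])
```

A real threat model then filters and elaborates these pairs; the exhaustive cross-product is just the starting checklist.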
Scenario Analysis
Developing plausible future scenarios and analysing how risks materialise
Impact Assessment
A tool used to assess the potential impacts of a technology or project
International AI Safety Report
Bengio, Yoshua; Mindermann, Sören; Privitera, Daniel; Besiroglu, Tamay; Bommasani, Rishi; Casper, Stephen; Choi, Yejin; Fox, Phillips; Garfinkel, Ben; Goldfarb, David A.; Heidari, Hoda; Ho, Anson; Kapoor, Sayash; Khalatbari, Leila; Longpre, Shayne; Manning, Sam; Mavroudis, Vasilios; Mazeika, Mantas; Michael, Julian; Newman, Jessica; Ng, TP; Okolo, Chinasa T.; Raji, Deborah; Sastry, Girish; Seger, Elizabeth; Skeadas, Theodora; South, Tobin; Strubell, Emma; Tramèr, Florian; Velasco, Luis; Wheeler, Nicole; Acemoglu, Daron; Adekanmbi, Olubayo; Dalrymple, David; Dietterich, Thomas G.; Felten, Edward W.; Fung, Pascale; Gourinchas, Pierre-Olivier; Heintz, F.; Hinton, Geoffrey E.; Jennings, Nicholas R.; Krause, Andreas; Leavy, Susan; Liang, Percy; Ludermir, Teresa; Marda, Vidushi; Margetts, Helen; McDermid, John; Munga, Jane; Narayanan, Arvind; Nelson, Alondra; Neppel, Clara; Oh, Alice; Ramchurn, Sarvapali D.; Russell, Stuart; Schaake, Marietje; Schölkopf, Bernhard; Song, Dawn; Soto, Álvaro; Tiedrich, Lee; Varoquaux, Gaël; Yao, Andrew I.; Zhang, Ya-Qin; Albalawi, Fahad; Alserkal, Marwan; Ajala, Oluremi N.; Avrin, Guillaume; Busch, Christoph; de Carvalho, André C. P. L. F.; Fox, B. W.; Gill, A. S.; Hatip, Ahmet; Heikkilä, J. K.; Jolly, Gill; Katzir, Ziv; Kitano, Hiroaki; Kruger, Adèle; Johnson, Chris A.; Khan, Saif; Lee, Kyoung Mu; Ligot, Dominic Vincent; Molchanovskyi, Oleksii; Monti, Andrea; Mwamanzi, Nusu; Nemer, Mona; Oliver, Nuria; Portillo, José; Ravindran, Balaraman; Pezoa Rivera, Raquel; Riza, Hammam; Rugege, Crystal; Seoighe, Ciarán; Sheehan, Katherine M.; Sheikh, Haroon; Wong, David; Zeng, Yi (2025)
Artificial Intelligence (AI) is progressing rapidly, and companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems. Although researchers have warned of extreme risks from AI, there is a lack of consensus about how exactly such risks arise, and how to manage them. Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts. AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems. In this short consensus paper, we describe extreme risks from upcoming, advanced AI systems. Drawing on lessons learned from other safety-critical technologies, we then outline a comprehensive plan combining technical research and development (R&D) with proactive, adaptive governance mechanisms for a more commensurate preparation.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Developer
Entity that creates, trains, or modifies the AI system
Map
Identifying and documenting AI risks, contexts, and impacts