Staged rollout strategies, phased deployment, and tiered access approaches for production systems.
Society can affect which AI systems are made available, to whom, and with what degrees of access. For example, companies can employ “staged release”: gradually making the system more widely available (Solaiman 2023). They could make potentially risky models available only via an API, allowing them to implement secure safeguards, such as watermarking or content provenance tags (Shevlane 2022). They could enforce terms of service policies, removing access from customers who use the system in prohibited ways.
Reasoning
Staged release and API-only deployment control the breadth and timing of access, implementing a phased distribution strategy.
Capability-Modifying Interventions
Capability-modifying interventions intervene at points immediately preceding the “development” and “diffusion” steps of the causal chain.
Capability-Modifying Interventions > Development interventions
Society can affect which AI capabilities are developed. For example, companies could refrain from developing systems with certain potentially harmful capabilities, or build systems that are more resistant to jailbreaking, are more likely to refuse potentially harmful requests, or produce outputs that are more easily identifiable as AI-generated.
Adaptation Interventions
Adaptation interventions, the primary focus of this paper, intervene at later stages in the causal chain. Such interventions immediately precede the “use”, “initial harm” or “impact” stages of that chain. (Occasionally, a specific intervention can affect multiple points along the causal chain.)
Adaptation Interventions > Avoidance interventions
Society can reduce the expected extent of potentially harmful uses of AI by making the problematic actions in question more difficult to engage in, or more costly relative to the alternatives.[12] One can make it more difficult for a given instance of potentially harmful AI activity to occur by limiting the user’s or the AI system’s access to key resources required for the activity in question, or to key actuators required to complete the intended action. (In the spear phishing example introduced in Section 3.1: relevant companies could make it harder for cybercriminals to access the names and contact details of their staff.) One can make potentially harmful uses of AI more costly ex ante by building institutions that create credible threats of punishment for harmful use.[13]
Adaptation Interventions > Defence interventions
Holding fixed that the potentially harmful use of AI occurs, society can reduce the expected extent of the corresponding initial harm. In our spear phishing example, “defence” is a matter of reducing the chance that the spear phishing emails succeed in giving the cybercriminal access to the sensitive information. For example, companies could provide anti-phishing training to their staff, and implement tools to warn staff of suspected phishing emails. They could ensure that only a very small number of staff members have access to particularly sensitive information, and then only with approval from other employees.
Adaptation Interventions > Remedial interventions
Holding fixed that the initial harm occurs, society can reduce or eliminate the expected negative impact downstream of it. In our spear phishing example, this might involve reducing the extent to which national security is undermined by the sale of the proprietary information to a foreign actor. For example, the company could include some false and misleading documents on its servers. Governments could reduce incentives for staff with relevant tacit knowledge to work for the foreign actor, on the grounds that tacit knowledge is often required to complement the information contained in documents.
Societal Adaptation to Advanced AI
Bernardi, Jamie; Mukobi, Gabriel; Greaves, Hilary; Heim, Lennart; Anderljung, Markus (2025)
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. However, this approach becomes less feasible as the number of developers of advanced AI grows, and impedes beneficial use-cases as well as harmful ones. In response, we urge a complementary approach: increasing societal adaptation to advanced AI, that is, reducing the expected negative impacts from a given level of diffusion of a given AI capability. We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems, illustrated with examples in election manipulation, cyberterrorism, and loss of control to AI decision-makers.
Deploy
Releasing the AI system into a production environment
Deployer
Entity that integrates and deploys the AI system for end users
Other
Risk management function not captured by the standard AIRM categories
Primary
Malicious Actors & Misuse