External environmental factors, multi-stakeholder mechanisms, and shared resources that shape conditions for AI development and use, operating beyond the control of any single organization.
Holding fixed that the initial harm occurs, society can reduce or eliminate the expected negative impact downstream of it. In our spear phishing example, this could involve reducing the extent to which national security is undermined by the sale of the proprietary information to a foreign actor. For example, the company could seed its servers with false and misleading documents. Governments could reduce incentives for staff with relevant implicit knowledge to work for the foreign actor, on the grounds that such knowledge is often required to complement the information contained in documents.
[4.1] Election Manipulation with Generative AI
[4.1.2] Remedy: In extreme circumstances, given robust evidence of election manipulation, governments could rerun elections, as has been done in Germany (Martin, Hallam, and Hubenko 2024), India (Agarwala 2024), Malawi (Kell 2020), and Serbia (Gec 2024), though caution is required (Huefner 2007). Impartial and transparent investigations into the integrity of the electoral process can build public trust and avoid secondary harms from a disgruntled public.

[4.2] AI-Enabled Cyberterrorism Attacks on Critical Infrastructure
[4.2.2] Remedy: Appropriate compensation schemes can reduce harm by spreading the costs associated with cyberattacks. Decoupled and redundant critical infrastructure, such as backup power for hospitals (Davoudi 2015), can ensure continuity of service. Cities can prepare to rapidly restore attacked infrastructure, for example via planning and drills for rebooting the power grid or repairing compromised digital systems.

[4.3] Loss of Control to AI Decision-Makers
[4.3.2] Remedy: Government agencies could "bust" harmful AI decision-makers in critical roles, such as corporate executives, disempowering them much as antitrust agencies strike down corporate decisions that undermine consumer welfare. Shared incident reporting mechanisms could help institutions piece together diffuse patterns of failure (McGregor 2021).
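The shared incident reporting idea above can be made concrete with a small sketch. This is an illustrative assumption, not a mechanism from the paper: the class names, failure-mode strings, and the two-reporter threshold are all invented for the example. The point is only that a pattern invisible to any single institution becomes visible once reports are pooled.

```python
# Illustrative sketch (assumed, not from the paper) of a shared incident
# registry: independent institutions file reports, and failure modes seen
# by multiple distinct reporters surface as "diffuse patterns".
from dataclasses import dataclass


@dataclass(frozen=True)
class Incident:
    reporter: str      # institution filing the report
    failure_mode: str  # short label for the observed failure


class IncidentRegistry:
    def __init__(self):
        self.incidents: list[Incident] = []

    def report(self, incident: Incident) -> None:
        self.incidents.append(incident)

    def diffuse_patterns(self, min_reporters: int = 2) -> list[str]:
        """Failure modes reported by at least `min_reporters` distinct institutions."""
        seen: dict[str, set[str]] = {}
        for i in self.incidents:
            seen.setdefault(i.failure_mode, set()).add(i.reporter)
        return [mode for mode, reporters in seen.items()
                if len(reporters) >= min_reporters]
```

A registry like this only helps if institutions trust it enough to file reports, which is why the paper's framing stresses shared, rather than purely internal, mechanisms.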
Reasoning
Incident response plans and remedial actions to mitigate downstream harms from AI-enabled attacks and failures.
Capability-Modifying Interventions
Capability-modifying interventions intervene at points immediately preceding the "development" and "diffusion" steps of the causal chain.
1 AI System
Capability-Modifying Interventions > Development interventions
Society can affect which AI capabilities are developed. For example, companies could refrain from developing systems that have certain potentially harmful capabilities, or make systems that are more resistant to jailbreaking, have higher chances of refusing potentially harmful requests, or have outputs that are more easily identifiable as AI-generated.
1.1 Model
Capability-Modifying Interventions > Diffusion interventions
Society can affect which AI systems are made available, to whom, and with what degrees of access. For example, companies can employ “staged release”: gradually making the system more widely available (Solaiman 2023). They could make potentially risky models available only via an API, allowing them to implement secure safeguards, such as watermarking or content provenance tags (Shevlane 2022). They could enforce terms of service policies, removing access from customers who use the system in prohibited ways.
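Two of the API-mediated safeguards mentioned above, content provenance tags and revoking access under a terms-of-service policy, can be sketched in a few lines. This is a minimal illustration under assumed names (`serve`, `revoke`, `example-model-v1` are all invented here); real provenance schemes such as cryptographic watermarking are substantially more involved.

```python
# Hypothetical sketch (not from the paper) of API-mediated safeguards:
# tag each output with a simple provenance record, and refuse service to
# API keys revoked for prohibited use.
import hashlib

REVOKED: set[str] = set()  # API keys removed under the terms of service


def tag_provenance(text: str, model_id: str) -> dict:
    """Attach a content-provenance record (model ID plus output hash)."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    return {"text": text, "provenance": {"model": model_id, "sha256": digest}}


def serve(api_key: str, prompt: str) -> dict:
    """Answer a request, refusing revoked keys and tagging the output."""
    if api_key in REVOKED:
        raise PermissionError("access revoked under terms of service")
    output = f"[model response to: {prompt}]"  # stand-in for a real model call
    return tag_provenance(output, model_id="example-model-v1")


def revoke(api_key: str) -> None:
    """Enforce the terms of service by cutting off a misbehaving customer."""
    REVOKED.add(api_key)
```

The structural point is that these controls are only enforceable because the model sits behind an API: with open weights, neither the tagging nor the revocation step exists.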
2.3.1 Deployment Management
Adaptation Interventions
Adaptation interventions, the primary focus of this paper, intervene at later stages in the causal chain. Such interventions immediately precede the “use”, “initial harm” or “impact” stages of that chain. (Occasionally, a specific intervention can affect multiple points along the causal chain.)
3 Ecosystem
Adaptation Interventions > Avoidance interventions
Society can reduce the expected extent of potentially harmful uses of AI by making the problematic actions in question more difficult to engage in, or more costly relative to the alternatives. One can make it more difficult for a given instance of potentially harmful AI activity to occur by limiting the user's or the AI system's access to key resources required for the activity in question, or to key actuators required to complete the intended action. (In the spear phishing example introduced in Section 3.1, relevant companies could make it harder for cybercriminals to access the names and contact details of their staff.) One can make potentially harmful uses of AI more costly ex ante by building institutions that create credible threats of punishment for harmful use.
3 Ecosystem
Adaptation Interventions > Defence interventions
Holding fixed that the potentially harmful use of AI occurs, society can reduce the expected extent of the corresponding initial harm. In our spear phishing example, “defence” is a matter of reducing the chance that the spear phishing emails succeed in giving the cybercriminal access to the sensitive information. For example, companies could provide anti-phishing training to their staff, and implement tools to warn staff of suspected phishing emails. They could ensure that only a very small number of staff members have access to particularly sensitive information, and then only with approval from other employees.
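The last defence mentioned above, releasing sensitive information only with approval from other employees, is a two-person control, and its core logic fits in a short sketch. All names here are illustrative assumptions, not from the paper:

```python
# Illustrative two-person control (assumed, not from the paper): a
# sensitive record is released only after a second, distinct employee
# approves the requester's access.
from dataclasses import dataclass, field


@dataclass
class SensitiveStore:
    data: dict
    approvals: dict = field(default_factory=dict)  # requester -> approver

    def approve(self, requester: str, approver: str) -> None:
        if approver == requester:
            raise ValueError("requesters cannot approve their own access")
        self.approvals[requester] = approver

    def read(self, requester: str, key: str) -> str:
        if requester not in self.approvals:
            raise PermissionError("second-party approval required")
        return self.data[key]
```

The design choice matters for the phishing scenario: even if one employee's credentials are captured, the attacker still needs to induce a second, independent employee to sign off before the sensitive information is released.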
2.3 Operations & Security
Societal Adaptation to Advanced AI
Bernardi, Jamie; Mukobi, Gabriel; Greaves, Hilary; Heim, Lennart; Anderljung, Markus (2025)
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. However, this approach becomes less feasible as the number of developers of advanced AI grows, and impedes beneficial use-cases as well as harmful ones. In response, we urge a complementary approach: increasing societal adaptation to advanced AI, that is, reducing the expected negative impacts from a given level of diffusion of a given AI capability. We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against, and remedy potentially harmful uses of AI systems, illustrated with examples in election manipulation, cyberterrorism, and loss of control to AI decision-makers.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Other
Risk management function not captured by the standard AIRM categories
Primary
4 Malicious Actors & Misuse