Society can reduce the expected extent of potentially harmful uses of AI by making the problematic actions in question more difficult to engage in, or more costly relative to the relevant alternatives. One can make it more difficult for a given instance of potentially harmful AI activity to occur by limiting the user’s or the AI system’s access to key resources required for the activity in question, or to key actuators required for completion of the intended action. (In the spear phishing example introduced in Section 3.1: relevant companies could make it harder for cybercriminals to access the names and contact details of their staff.) One can make potentially harmful uses of AI more costly ex ante by building institutions that create credible threats of punishment for harmful use.
[4.1] Election Manipulation with Generative AI
[4.1.2] Avoidance: Governments can deter election interference by criminalising it (Lerner 2023), subject to requirements of free speech (Toney 2024). Social media platforms can require some “proof of humanity” for the creation of user accounts, making it more challenging for bot accounts to spread disinformation (Shoemaker 2024).
[4.2] AI-Enabled Cyberterrorism Attacks on Critical Infrastructure
[4.2.2] Avoidance: Robust international agreements against cyberterrorism could facilitate global cooperation in detecting, tracking, and prosecuting cyberterrorists (Peters and Jordan 2020). Enhancing states’ abilities to detect cyber intrusions into critical infrastructure systems could preemptively identify and neutralise threats (Critical Infrastructure Security Agency 2013), especially advanced persistent threats (Critical Infrastructure Security Agency 2024).
[4.3] Loss of Control to AI Decision-Makers
[4.3.2] Avoidance: Regulation could limit decision-making automation in certain high-stakes industries or government roles until these systems have been proven trustworthy (Coy 2024), similar to the existing regime requiring trials for new pharmaceuticals (U.S. Food and Drug Administration 2017).
Reasoning
Governments criminalize harmful AI use and create credible punishment threats to deter activity.
Capability-Modifying Interventions
Capability-modifying interventions intervene at points immediately preceding the “development” and “diffusion” steps of the causal chain.
Capability-Modifying Interventions > Development interventions
Society can affect which AI capabilities are developed. For example, companies could refrain from developing systems that have certain potentially harmful capabilities, or build systems that are more resistant to jailbreaking, have higher chances of refusing potentially harmful requests, or produce outputs that are more easily identifiable as AI-generated.
Capability-Modifying Interventions > Diffusion interventions
Society can affect which AI systems are made available, to whom, and with what degrees of access. For example, companies can employ “staged release”: gradually making the system more widely available (Solaiman 2023). They could make potentially risky models available only via an API, allowing them to implement secure safeguards, such as watermarking or content provenance tags (Shevlane 2022). They could enforce terms of service policies, removing access from customers who use the system in prohibited ways.
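The combination of API-mediated access, provenance tagging, and terms-of-service enforcement described above can be sketched in a few lines. This is a toy illustration, not a real provider's implementation: the function names, the provenance tag format, and the keyword-based policy check are all invented for the example (real deployments would use trained classifiers and signed provenance metadata such as C2PA).

```python
# Toy sketch: API-mediated model access lets a provider tag outputs and
# revoke access for policy violations. All names here are hypothetical.

BANNED_KEYS = set()  # API keys revoked for terms-of-service violations
PROVENANCE_TAG = "[AI-generated: example-model-v1]"

def generate(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return f"Response to: {prompt}"

def violates_terms(prompt: str) -> bool:
    """Toy usage-policy check; a real system would use a classifier."""
    return "phishing" in prompt.lower()

def serve_request(api_key: str, prompt: str) -> str:
    if api_key in BANNED_KEYS:
        raise PermissionError("access revoked")
    if violates_terms(prompt):
        BANNED_KEYS.add(api_key)  # enforce terms of service: remove access
        raise ValueError("request violates usage policy")
    # Because access is API-mediated, the provider can attach a
    # provenance tag to every output before it leaves the service.
    return generate(prompt) + " " + PROVENANCE_TAG
```

The key design point is that none of these safeguards are possible once model weights are released openly; they depend on the provider sitting between the user and the model.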
Adaptation Interventions
Adaptation interventions, the primary focus of this paper, intervene at later stages in the causal chain. Such interventions immediately precede the “use”, “initial harm” or “impact” stages of that chain. (Occasionally, a specific intervention can affect multiple points along the causal chain.)
Adaptation Interventions > Defence interventions
Holding fixed that the potentially harmful use of AI occurs, society can reduce the expected extent of the corresponding initial harm. In our spear phishing example, “defence” is a matter of reducing the chance that the spear phishing emails succeed in giving the cybercriminal access to the sensitive information. For example, companies could provide anti-phishing training to their staff, and implement tools to warn staff of suspected phishing emails. They could ensure that only a very small number of staff members have access to particularly sensitive information, and then only with approval from other employees.
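A tool that warns staff of suspected phishing emails, as mentioned above, can be illustrated with a minimal scoring heuristic. The phrase list, weights, and threshold below are invented for illustration; production filters rely on trained classifiers over many more signals (sender reputation, link targets, message headers).

```python
# Toy "defence" sketch: flag emails that look like phishing so staff see a
# warning banner. Heuristics and threshold are hypothetical.

SUSPICIOUS_PHRASES = ["urgent", "verify your account", "click this link", "password"]

def phishing_score(sender_domain: str, body: str, trusted_domains: set) -> int:
    """Crude risk score: external sender plus suspicious wording."""
    score = 0
    if sender_domain not in trusted_domains:
        score += 2  # mail from outside the organisation is riskier
    body_lower = body.lower()
    score += sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in body_lower)
    return score

def should_warn(sender_domain: str, body: str,
                trusted_domains: set, threshold: int = 3) -> bool:
    """Show a warning banner when the score crosses the threshold."""
    return phishing_score(sender_domain, body, trusted_domains) >= threshold
```

Note that this intervention leaves the attacker's capability untouched: it reduces the probability that a given spear phishing attempt succeeds, which is exactly what distinguishes defence from avoidance.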
Adaptation Interventions > Remedial interventions
Holding fixed that the initial harm occurs, society can reduce or eliminate the expected negative impact downstream of it. In our spear phishing example, this might involve reducing the extent to which national security is undermined as a result of the sale of the proprietary information to a foreign actor. For example, the company could place some false and misleading documents on its servers. Governments could reduce incentives for staff with relevant implicit knowledge to work for the foreign actor, on the grounds that implicit knowledge is often required to complement the information contained in documents.
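The false-documents idea resembles the "honeytoken" pattern from security practice: decoy files mixed among real ones both dilute the value of stolen material and, if access is monitored, reveal that an intrusion has occurred. A minimal sketch, with all names invented for illustration:

```python
# Toy "remedial" sketch: decoy documents whose access raises an alert.
# File names, store layout, and the alert mechanism are hypothetical.

alerts = []  # in a real system, alerts would go to a monitoring service

DECOY_DOCS = {"q3_roadmap_FINAL.docx"}  # decoys mixed among genuine files

def read_document(name: str, store: dict) -> str:
    """Return a document's contents, flagging reads of decoy files."""
    if name in DECOY_DOCS:
        alerts.append(f"decoy accessed: {name}")  # likely intrusion signal
    return store.get(name, "")
```

Even if the decoy is exfiltrated, the stolen cache now contains misleading material, reducing the downstream impact of the initial harm.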
Societal Adaptation to Advanced AI
Bernardi, Jamie; Mukobi, Gabriel; Greaves, Hilary; Heim, Lennart; Anderljung, Markus (2025)
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. However, this approach becomes less feasible as the number of developers of advanced AI grows, and impedes beneficial use-cases as well as harmful ones. In response, we urge a complementary approach: increasing societal adaptation to advanced AI, that is, reducing the expected negative impacts from a given level of diffusion of a given AI capability. We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems, illustrated with examples in election manipulation, cyberterrorism, and loss of control to AI decision-makers.