Implementation standards, guidelines, and documented best practices for AI development.
Establish and enforce cyber and information security measures for AI labs and systems to protect against threats such as the theft or misuse of AI models and related intellectual property.
Reasoning
Organizational security measures protect AI systems through access controls, encryption, and infrastructure hardening.
Securing model parameters and other key intellectual property
Future AI systems may be highly capable, and dangerous in the wrong hands. Adversaries such as nation states or terrorist groups are therefore likely to try to steal these systems to pursue their own goals. Attackers might also target related intellectual property, such as algorithms that improve the efficiency of training or running AI systems.
2.3.2 Access & Security Controls
Securing model environments
We may want to contain AI systems within controlled environments, for example when testing them. To do this effectively, we'll need to do a good job of securing those environments themselves: these AI systems might be very good at breaking the security measures we put in place, and if they broke out, they could cause catastrophic harm.
1.2.2 Runtime Environment
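To make this concrete, here is a minimal Python sketch of one containment layer: running model-generated code in a separate process with CPU and memory limits. The limit values and the `run_untrusted` helper are illustrative assumptions, not an established tool; real containment would also need network isolation, OS-level sandboxing, and monitoring.

```python
import resource
import subprocess
import sys

# Illustrative limits; real containment would layer OS sandboxing
# (namespaces, seccomp), network isolation, and monitoring on top.
CPU_SECONDS = 5
MEMORY_BYTES = 256 * 1024 * 1024


def _apply_limits() -> None:
    # Runs in the child process just before exec (POSIX only).
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS))
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_BYTES, MEMORY_BYTES))


def run_untrusted(code: str) -> subprocess.CompletedProcess:
    """Run model-generated Python in a separate, resource-limited process."""
    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode
        preexec_fn=_apply_limits,
        capture_output=True,
        text=True,
        timeout=30,  # wall-clock backstop alongside the CPU limit
    )


if __name__ == "__main__":
    print(run_untrusted("print(sum(range(10)))").stdout)  # "45"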
Securing internal change management processes
It’s crucial that decisions about training or deploying AI systems at companies are appropriately authorised, and comply with other requirements around safety.
2.1.3 Policies & Procedures
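As one illustration of such a gate, a deployment decision could be blocked until the required sign-offs are recorded. The Python sketch below is hypothetical: the required roles and the `ChangeRequest` fields are assumptions, not a standard; in practice this logic would live in ticketing and CI/CD systems with audit logs.

```python
from dataclasses import dataclass, field

# Hypothetical roles whose sign-off a change might require; real requirements
# would come from the organisation's change-management policy.
REQUIRED_APPROVALS = {"safety_team", "security_team", "responsible_executive"}


@dataclass
class ChangeRequest:
    description: str
    approvals: set = field(default_factory=set)
    eval_results_attached: bool = False


def authorised(request: ChangeRequest) -> bool:
    """Authorise only if every required role signed off and safety
    evidence is attached."""
    return REQUIRED_APPROVALS <= request.approvals and request.eval_results_attached


request = ChangeRequest("Deploy model v2 to the production API")
request.approvals.update({"safety_team", "security_team"})
print(authorised(request))  # False: no executive sign-off, no eval evidence
```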
‘Traditional’ security concerns of AI systems
Like standard computer systems, AI systems may be entrusted with sensitive information or control over resources. We’ll therefore need to defend them against the same kinds of attacks we defend standard computer systems from.
2.3.2 Access & Security Controls
Securing other systems
AI systems are expected to increase the volume and impact of cyberattacks over the next two years, and to improve the capabilities available to cybercriminals and state actors in 2025 and beyond. Open-weights models are likely to amplify this threat: their safeguards can be removed cheaply, they can be fine-tuned to help cyberattackers, and they cannot be recalled. Given that many powerful open-weights models have already been released, it’s infeasible to ‘put the genie back in the bottle’ and prevent the use of AI systems for cyberattacks.[15] This means significant work is likely necessary to defend against the coming wave of AI-enabled cyberattacks.
2.3.2 Access & Security Controls
Compute governance
Regulate companies in the highly concentrated AI chip supply chain, given AI chips are key inputs to developing frontier AI models.
3.1.1 Legislation & Policy
Data input controls
Filter data used to train AI models, e.g. exclude documents with instructions for launching cyberattacks.
1.1.1 Training Data
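A minimal Python sketch of what such a filter might look like; the blocked patterns and the `passes_filter` helper are made up for illustration, and production pipelines typically combine keyword rules with trained classifiers and human review of edge cases.

```python
import re

# Made-up blocklist patterns for illustration; production pipelines combine
# keyword rules with trained classifiers and human review of edge cases.
BLOCKED_PATTERNS = [
    re.compile(r"how to (build|make) a (bomb|bioweapon)", re.IGNORECASE),
    re.compile(r"(exploit|payload) for CVE-\d{4}-\d+", re.IGNORECASE),
]


def passes_filter(document: str) -> bool:
    """Return False if a training document matches any blocked pattern."""
    return not any(p.search(document) for p in BLOCKED_PATTERNS)


corpus = [
    "A history of cryptography.",
    "Working exploit for CVE-2024-12345 targeting web servers.",
]
print([doc for doc in corpus if passes_filter(doc)])  # first document only
```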
Licensing
Require organisations or specific training runs to be licensed by a regulatory body, similar to licensing regimes in other high-risk industries.
3.1.4 Compliance Requirements
On-chip governance mechanisms
Make alterations to AI hardware (primarily AI chips) that enable verifying or controlling how the hardware is used.
1.2.4 Security Infrastructure
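As a toy illustration of the verification idea, the Python sketch below signs and checks a usage report with a shared secret. Everything here (the key handling, the report fields, and the HMAC scheme) is an assumption made for illustration; real on-chip governance proposals rely on hardware roots of trust and asymmetric attestation rather than shared secrets.

```python
import hashlib
import hmac
import json

# Toy scheme: a shared secret stands in for a hardware root of trust.
# Real proposals use per-chip keys and asymmetric attestation instead.
CHIP_KEY = b"device-secret-provisioned-at-manufacture"  # assumed, not real


def sign_usage_report(report: dict) -> str:
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(CHIP_KEY, payload, hashlib.sha256).hexdigest()


def verify_usage_report(report: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_usage_report(report), signature)


report = {"chip_id": "chip-0001", "flop_count": 10**15, "period": "2024-08"}
signature = sign_usage_report(report)
print(verify_usage_report(report, signature))                            # True
print(verify_usage_report({**report, "flop_count": 10**18}, signature))  # False
```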
Safety cases
Develop structured arguments demonstrating that an AI system is unlikely to cause catastrophic harm, to inform decisions about training and deployment.
2.2.4 Assurance Documentation
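One way to see the shape of such an argument is as a tree of claims, each resting on evidence or on supported subclaims. The Python sketch below is a toy model; the `Claim` class and its fields are illustrative assumptions, not an established safety-case notation (real formats include goal structuring notation).

```python
from dataclasses import dataclass, field

# Toy model of a structured safety argument: a tree of claims, each resting
# on evidence or on supported subclaims. Field names are illustrative only.
@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)
    subclaims: list = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds if it cites evidence, or all its subclaims hold."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)


case = Claim(
    "The system is unlikely to cause catastrophic harm",
    subclaims=[
        Claim("It lacks dangerous capabilities", evidence=["capability eval report"]),
        Claim("Deployment safeguards prevent misuse"),  # no evidence yet
    ],
)
print(case.supported())  # False: one subclaim is still unsupported
```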
Evaluations (aka “evals”)
Give AI systems standardised tests to assess their capabilities, which can inform the risks they might pose.
2.2.2 Testing & Evaluation
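A minimal Python sketch of an eval harness: run a model over standardised questions and score exact-match accuracy. The questions and the `dummy_model` stand-in are placeholders, not a real benchmark; production evals use larger question sets, graded scoring, and capability-specific tasks.

```python
from typing import Callable

# Toy eval harness: score a text-in/text-out model on standardised questions.
# The questions and dummy model are placeholders, not a real benchmark.
EVAL_SET = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "Name the capital of France.", "expected": "Paris"},
]


def run_eval(model: Callable[[str], str]) -> float:
    """Return exact-match accuracy of the model on the eval set."""
    correct = sum(
        model(item["prompt"]).strip() == item["expected"] for item in EVAL_SET
    )
    return correct / len(EVAL_SET)


def dummy_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "Paris"


print(run_eval(dummy_model))  # 1.0
```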
The AI regulator’s toolbox: A list of concrete AI governance practices
Jones, Adam (2024)
This article explains concrete AI governance practices people are exploring as of August 2024. Prior summaries have mapped out high-level areas of work, but rarely dive into concrete practice details. This summary explores specific practices addressing risks from advanced AI systems. Practices are grouped into categories based on where in the AI lifecycle they best fit. The primary goal of this article is to help newcomers contribute to the field of AI governance by providing a comprehensive overview of available practices.
Other (multiple stages): Applies across multiple lifecycle stages
Governance Actor: Regulator, standards body, or oversight entity shaping AI policy
Govern: Policies, processes, and accountability structures for AI risk management