Staged rollout strategies, phased deployment, and tiered access approaches for production systems.
Gradually roll out AI systems to larger populations to monitor impacts and allow for controlled scaling or rollback if issues arise.
Developing guidance for this would include:

- Suggesting recommended rollout speeds for types of AI applications, likely with regard to the safety-criticality of the system and the change in capability being deployed.
- Designing methods to select representative samples of users, particularly as timed rollouts might overlap with only some timezones or usage patterns (this may have been solved in other areas, and the learnings just need to be carried across). This might consider ‘safer’ users getting powerful capabilities first, as well as equity of access (so certain regions or populations are not always last to get beneficial technology).
- Developing standards for monitoring AI systems during progressive rollouts, perhaps tying this together with [third-party auditing](https://adamjones.me/blog/ai-regulator-toolbox/#third-party-auditing).
- Identifying appropriate responses to different monitoring results, i.e. when companies should roll back changes (a minimal sketch of a rollout controller covering cohort selection and rollback follows this list).
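To make the mechanics concrete, here is a minimal sketch in Python of how staged access and a monitoring-driven advance/rollback decision could fit together. The stage fractions, incident-rate threshold, and function names are hypothetical illustrations, not recommendations from this page.

```python
import hashlib

# Hypothetical schedule and threshold -- illustrative values only.
STAGES = [0.01, 0.05, 0.25, 1.00]   # fraction of users with access at each stage
MAX_INCIDENT_RATE = 0.002           # roll back if observed incident rate exceeds this


def in_rollout(user_id: str, fraction: float, salt: str = "staged-rollout-v1") -> bool:
    """Deterministically assign a user to the rollout cohort.

    Hashing a stable identifier (rather than taking the first N sign-ups)
    avoids a cohort skewed towards whichever timezones or usage patterns
    happen to be active when the stage opens.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) / 2**256 < fraction


def next_action(stage: int, incidents: int, exposed_users: int) -> str:
    """Decide whether to hold, advance to the next stage, or roll back."""
    if exposed_users == 0:
        return "hold"
    if incidents / exposed_users > MAX_INCIDENT_RATE:
        return "rollback"
    if stage + 1 < len(STAGES):
        return "advance"
    return "fully deployed"


if __name__ == "__main__":
    stage = 1  # currently at the 5% stage
    cohort = [u for u in ("user-1", "user-2", "user-3") if in_rollout(u, STAGES[stage])]
    print(cohort, next_action(stage, incidents=0, exposed_users=1000))
```

Hash-based assignment keeps cohort membership consistent as the fraction grows, so earlier users are never removed when a stage advances. Real deployments would also need the equity-of-access and ‘safer users first’ considerations above, which a plain hash does not capture.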
Reasoning
Staged rollouts gradually deploy AI systems to larger populations, monitoring impacts so that scaling can be controlled or changes rolled back if issues arise.
Source: Jones, Adam (2024). The AI regulator’s toolbox: A list of concrete AI governance practices.
This article explains concrete AI governance practices people are exploring as of August 2024. Prior summaries have mapped out high-level areas of work, but rarely dive into concrete practice details. This summary explores specific practices addressing risks from advanced AI systems. Practices are grouped into categories based on where in the AI lifecycle they best fit. The primary goal of this article is to help newcomers contribute to the field of AI governance by providing a comprehensive overview of available practices.

Related practices:

- Compute governance (3.1.1 Legislation & Policy): Regulate companies in the highly concentrated AI chip supply chain, given AI chips are key inputs to developing frontier AI models.
- Data input controls (1.1.1 Training Data): Filter data used to train AI models, e.g. don’t train your model with instructions to launch cyberattacks.
- Licensing (3.1.4 Compliance Requirements): Require organisations or specific training runs to be licensed by a regulatory body, similar to licensing regimes in other high-risk industries.
- On-chip governance mechanisms (1.2.4 Security Infrastructure): Make alterations to AI hardware (primarily AI chips) that enable verifying or controlling the usage of this hardware.
- Safety cases (2.2.4 Assurance Documentation): Develop structured arguments demonstrating that an AI system is unlikely to cause catastrophic harm, to inform decisions about training and deployment.
- Evaluations (aka “evals”) (2.2.2 Testing & Evaluation): Give AI systems standardised tests to assess their capabilities, which can inform the risks they might pose.
Definitions:

- Deploy: Releasing the AI system into a production environment.
- Deployer: Entity that integrates and deploys the AI system for end users.
- Manage: Prioritising, responding to, and mitigating AI risks.