Penalties, sanctions, taxes, incentives, and enforcement powers backed by state authority.
Penalties or tax breaks to incentivise ‘good’ behaviours (like investing in AI safety research).
Tax penalties or other government fines can be used to encourage compliance:

- Significant penalties or fines for attributable non-compliance with AI regulations
- Levies on AI companies for systemic harms that are not directly attributable to a specific AI system, or for companies that go bust (similar to the [Motor Insurers’ Bureau](https://www.mib.org.uk/making-a-claim/what-we-do/), which covers unidentifiable or uninsured drivers, or [the FSCS](https://www.fscs.org.uk/), which covers financial firms that have gone out of business)

Tax incentives can be used to encourage AI companies and researchers to prioritise safety research, or to build safe AI systems. This might include:

- Tax deductions for investments in AI safety research
- A reduced [robot tax](https://adamjones.me/blog/ai-regulator-toolbox/#robot-tax) for systems using designs that pose less risk of catastrophic outcomes
- [Accelerated depreciation](https://www.investopedia.com/terms/a/accelerateddepreciation.asp) for hardware used in AI safety testing and evaluation
Reasoning
Government-imposed financial penalties and tax incentives enforce compliance with AI regulations through state authority.
Also in Legal & Regulatory:

- Compute governance (3.1.1 Legislation & Policy): Regulate companies in the highly concentrated AI chip supply chain, given that AI chips are key inputs to developing frontier AI models.
- Data input controls (1.1.1 Training Data): Filter the data used to train AI models, e.g. don’t train a model on instructions for launching cyberattacks.
- Licensing (3.1.4 Compliance Requirements): Require organisations or specific training runs to be licensed by a regulatory body, similar to licensing regimes in other high-risk industries.
- On-chip governance mechanisms (1.2.4 Security Infrastructure): Make alterations to AI hardware (primarily AI chips) that enable verifying or controlling how that hardware is used.
- Safety cases (2.2.4 Assurance Documentation): Develop structured arguments demonstrating that an AI system is unlikely to cause catastrophic harm, to inform decisions about training and deployment.
- Evaluations (aka “evals”) (2.2.2 Testing & Evaluation): Give AI systems standardised tests to assess their capabilities, which can inform judgements about the risks they might pose.

Source: Jones, Adam (2024). The AI regulator’s toolbox: A list of concrete AI governance practices.
This article explains concrete AI governance practices people are exploring as of August 2024. Prior summaries have mapped out high-level areas of work, but rarely dive into concrete practice details. This summary explores specific practices addressing risks from advanced AI systems. Practices are grouped into categories based on where in the AI lifecycle they best fit. The primary goal of this article is to help newcomers contribute to the field of AI governance by providing a comprehensive overview of available practices.
Tags:

- Other (outside lifecycle): Outside the standard AI system lifecycle
- Governance Actor: A regulator, standards body, or oversight entity shaping AI policy
- Govern: Policies, processes, and accountability structures for AI risk management