Laws, legal frameworks, and binding policy instruments governing AI development and use.
Also in Legal & Regulatory
Turn policy ideas into specific, legally enforceable rules, usually accompanied by powers to enforce them.
As well as selecting policies to implement, this work also includes:

- Deciding whether to legislate, use existing legal powers, or use non-legal powers (e.g. negotiating voluntary commitments).
- Fleshing out practices into things that can actually be implemented. For example, if we want [responsible disclosure programmes](https://adamjones.me/blog/ai-regulator-toolbox/#responsible-disclosure-programmes) to be required by law, we need to legally define the relevant concepts, specify the requirements and offences, set penalties for non-compliance, and give a regulator legal powers to investigate, enforce, and support compliance (updating any existing laws that conflict).
- Deciding how specific legislation and regulation should be. Tech regulation often involves a trade-off between covering future developments and being specific now; specificity helps regulators know when they can and can't enforce, and helps AI companies know what they can and can't do.
- Keeping legislation up to date. For example, tasking a body to review the legislation after a period of time, and providing mechanisms for easier updates (such as pointing to standards which can be revised, or allowing the executive to create secondary legislation such as statutory instruments).
- Harmonising legislation and regulations across jurisdictions, to support compliance and enforcement activities.
- Minimising [negative side effects of regulation](https://www.lesswrong.com/posts/6untaSPpsocmkS7Z3/ways-i-expect-ai-regulation-to-increase-extinction-risk).
Reasoning
Developing enforceable legal rules and regulatory frameworks for AI systems requires state authority and legislative action.
Policy implementation
Compute governance
Regulate companies in the highly concentrated AI chip supply chain, given AI chips are key inputs to developing frontier AI models.
3.1.1 Legislation & Policy
Data input controls
Filter data used to train AI models, e.g. don’t train your model with instructions to launch cyberattacks.
1.1.1 Training Data
Licensing
Require organisations or specific training runs to be licensed by a regulatory body, similar to licensing regimes in other high-risk industries.
3.1.4 Compliance Requirements
On-chip governance mechanisms
Make alterations to AI hardware (primarily AI chips), that enable verifying or controlling the usage of this hardware.
1.2.4 Security Infrastructure
Safety cases
Develop structured arguments demonstrating that an AI system is unlikely to cause catastrophic harm, to inform decisions about training and deployment.
2.2.4 Assurance Documentation
Evaluations (aka “evals”)
Give AI systems standardised tests to assess their capabilities, which can inform the risks they might pose.
2.2.2 Testing & Evaluation
The AI regulator’s toolbox: A list of concrete AI governance practices
Jones, Adam (2024)
This article explains concrete AI governance practices people are exploring as of August 2024. Prior summaries have mapped out high-level areas of work, but rarely dive into concrete practice details. This summary explores specific practices addressing risks from advanced AI systems. Practices are grouped into categories based on where in the AI lifecycle they best fit. The primary goal of this article is to help newcomers contribute to the field of AI governance by providing a comprehensive overview of available practices.
Other (outside lifecycle)
Outside the standard AI system lifecycle
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management
Primary
6.5 Governance failure