Operations & Security
User vetting, access restrictions, encryption, and infrastructure security for deployed systems.
Identify, verify, understand and risk assess users of AI systems. In conjunction with other interventions, this could be used to restrict access to potentially dangerous capabilities.
Financial services firms are expected to perform customer due diligence (CDD)[7](https://adamjones.me/blog/ai-regulator-toolbox/#user-content-fn-7) to prevent financial crime. This usually involves:

- **Identifying** customers: usually asking customers for details that could uniquely identify them. This might also include asking for details about corporate structures, like the ultimate [beneficial owners](https://en.wikipedia.org/wiki/Beneficial_ownership) of a company.
- **Verifying**[8](https://adamjones.me/blog/ai-regulator-toolbox/#user-content-fn-8) customer identities: checking the person really exists and is who they say they are. This might involve reviewing ID documents and a selfie video.
- **Understanding** customers: understanding who customers are, and how customers will use your services. This often combines:
  - asking customers for this info
  - pulling in structured data from third parties (such as registers of companies like [Companies House](https://find-and-update.company-information.service.gov.uk/), fraud databases like [Cifas](https://www.cifas.org.uk/), and credit reference agencies like [Experian](https://www.experian.co.uk/), [Equifax](https://www.equifax.co.uk/) and [TransUnion](https://www.transunion.co.uk/))
  - reviewing a customer's online presence or previous interactions with the firm
- **Risk assessing** customers: evaluating the information collected about the customer, and developing a risk profile. This might then determine what kinds of activity would be considered suspicious. This is usually done on a regular basis, and is closely linked to ongoing monitoring of customer behaviours. If something very suspicious is flagged, the firm reports this to law enforcement as a [suspicious activity report](https://www.nationalcrimeagency.gov.uk/what-we-do/crime-threats/money-laundering-and-illicit-finance/suspicious-activity-reports).
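To make these four steps concrete for an AI deployer, here is a minimal Python sketch of what a user-vetting record and risk assessment might look like. Everything in it (the `User` fields, the `RiskTier` tiers, the keyword screen in `risk_assess`) is a hypothetical illustration of the workflow above, not a real compliance API or a method from the source article.

```python
# Hypothetical sketch of the four CDD steps applied to vetting users of an
# AI system. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # standard access
    ELEVATED = "elevated"  # enhanced monitoring of this user's activity
    BLOCKED = "blocked"    # deny access to potentially dangerous capabilities


@dataclass
class User:
    # 1. Identifying: details that could uniquely identify the customer.
    full_name: str
    date_of_birth: str
    country: str
    # 2. Verifying: whether the claimed identity has been checked,
    #    e.g. via ID documents and a selfie video.
    identity_verified: bool = False
    # 3. Understanding: the customer's stated use case, plus signals from
    #    third parties such as fraud databases.
    stated_use_case: str = ""
    fraud_database_hits: int = 0


def risk_assess(user: User) -> RiskTier:
    """4. Risk assessing: turn the collected information into a risk profile.

    A real firm would use far richer signals and re-run this regularly as
    part of ongoing monitoring; this only shows the shape of the decision.
    """
    if not user.identity_verified or user.fraud_database_hits > 0:
        return RiskTier.BLOCKED
    # Hypothetical keyword screen on the stated use case.
    sensitive_terms = {"bioweapon", "malware", "exploit"}
    if any(term in user.stated_use_case.lower() for term in sensitive_terms):
        return RiskTier.ELEVATED
    return RiskTier.LOW


if __name__ == "__main__":
    applicant = User(
        full_name="Jane Doe",
        date_of_birth="1990-01-01",
        country="GB",
        identity_verified=True,
        stated_use_case="academic research on protein folding",
    )
    print(risk_assess(applicant))  # RiskTier.LOW
```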
Reasoning
Vets users through identity verification and risk assessment to control system access.
Also in Operations & Security

- **Compute governance** (3.1.1 Legislation & Policy): Regulate companies in the highly concentrated AI chip supply chain, given AI chips are key inputs to developing frontier AI models.
- **Data input controls** (1.1.1 Training Data): Filter data used to train AI models, e.g. don't train your model with instructions to launch cyberattacks.
- **Licensing** (3.1.4 Compliance Requirements): Require organisations or specific training runs to be licensed by a regulatory body, similar to licensing regimes in other high-risk industries.
- **On-chip governance mechanisms** (1.2.4 Security Infrastructure): Make alterations to AI hardware (primarily AI chips) that enable verifying or controlling the usage of this hardware.
- **Safety cases** (2.2.4 Assurance Documentation): Develop structured arguments demonstrating that an AI system is unlikely to cause catastrophic harm, to inform decisions about training and deployment.
- **Evaluations (aka “evals”)** (2.2.2 Testing & Evaluation): Give AI systems standardised tests to assess their capabilities, which can inform the risks they might pose.

The AI regulator’s toolbox: A list of concrete AI governance practices
Jones, Adam (2024)
This article explains concrete AI governance practices people are exploring as of August 2024. Prior summaries have mapped out high-level areas of work, but rarely dive into concrete practice details. This summary explores specific practices addressing risks from advanced AI systems. Practices are grouped into categories based on where in the AI lifecycle they best fit. The primary goal of this article is to help newcomers contribute to the field of AI governance by providing a comprehensive overview of available practices.
Deploy
Releasing the AI system into a production environment
Deployer
Entity that integrates and deploys the AI system for end users
Map
Identifying and documenting AI risks, contexts, and impacts
Primary
4 Malicious Actors & Misuse