Containment, isolation, and control mechanisms for system execution.
Deployers of an AI system can restrict its permissions to a whitelisted set of predetermined options, such that any option not on the whitelist is inaccessible to the AI [200, 146]. The entries on the whitelist can be kept to the minimum the AI system needs to fulfill its intended purpose, which reduces the attack surface available to external attackers and decreases the probability that the AI system accidentally takes actions with large unintended side effects.
For example, such whitelisting can apply to network connections, execution of other programs, access to knowledge bases, and the action space of the AI system. It can be implemented by running the AI system under OS-level virtualization, on networks behind a firewall, and, in extreme cases, on air-gapped machines.
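Restricting the action space as described above can be sketched as a simple allowlist dispatcher that sits between the AI system and its tools. This is an illustrative sketch, not from the source: the names `ALLOWED_ACTIONS`, `dispatch`, and `read_docs` are hypothetical, and a real deployment would enforce the whitelist at a lower layer (e.g. OS or network) rather than only in application code.

```python
# Hypothetical allowlist wrapper for an AI system's tool calls.
# Any action not explicitly whitelisted is rejected.

from typing import Callable, Dict


def read_docs(query: str) -> str:
    """Stub for an approved, read-only knowledge-base lookup."""
    return f"docs result for {query!r}"


# The whitelist: the only actions the AI system may invoke.
# Kept as small as the system's intended purpose allows.
ALLOWED_ACTIONS: Dict[str, Callable[[str], str]] = {
    "read_docs": read_docs,
}


def dispatch(action: str, argument: str) -> str:
    """Execute an action only if it appears on the whitelist."""
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        raise PermissionError(f"action {action!r} is not whitelisted")
    return handler(argument)
```

The design choice here is deny-by-default: new capabilities require an explicit whitelist entry, so forgetting to register a tool fails closed rather than open.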
Reasoning
Restricts system access to authorized users through permission controls, an operational security practice.
Cybersecurity
Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > General evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Deploy
Releasing the AI system into a production environment
Deployer
Entity that integrates and deploys the AI system for end users
Manage
Prioritising, responding to, and mitigating AI risks