Red teaming, capability evaluations, adversarial testing, and performance verification.
Model developers can demonstrate that there is an acceptable “margin of safety” between the current version of the model and a plausible version with dangerous capabilities or potential system failures, whether these arise from the model itself or through scaffolding. This margin can be tracked and evaluated based on the model’s performance on component tasks or proxy tasks of varying difficulty [44], and it is particularly relevant for general-purpose models with emergent properties, where some of the risks, use cases, and model capabilities may be unknown even at the time of deployment.

A margin of safety (also called a “safety factor”) is common practice in many industries, particularly for physical structures, and it is typically very conservative when feasible (e.g., a factor of 4 or more for fasteners in critical structures). Where a high margin of safety is impractical, it may be supplemented by more frequent inspections and additional process controls. A lower safety factor can also be managed by adopting conservative assumptions about worst-case conditions.
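As a minimal illustration of how such tracking might look in practice, the sketch below computes a safety factor as the ratio between a dangerous-capability threshold and measured performance on proxy tasks of varying difficulty, and flags tasks whose factor falls below a conservative minimum. All task names, scores, and thresholds here are hypothetical placeholders, not values from the cited frameworks.

```python
"""Sketch: tracking a capability "margin of safety" over proxy tasks.

All task names, scores, and thresholds are hypothetical illustrations.
"""

from dataclasses import dataclass


@dataclass
class ProxyTask:
    name: str
    difficulty: str          # e.g. "easy", "medium", "hard"
    danger_threshold: float  # score at which the capability is considered dangerous
    measured_score: float    # model's current score on this proxy task


def safety_factor(task: ProxyTask) -> float:
    """Ratio of the dangerous-capability threshold to current performance.

    A factor of 4 means performance on this proxy task would need to
    roughly quadruple before crossing the danger threshold.
    """
    return task.danger_threshold / max(task.measured_score, 1e-9)


# Hypothetical evaluation results on component/proxy tasks of varying difficulty.
tasks = [
    ProxyTask("tool-use chain, 3 steps", "easy", danger_threshold=0.9, measured_score=0.60),
    ProxyTask("tool-use chain, 10 steps", "medium", danger_threshold=0.9, measured_score=0.20),
    ProxyTask("autonomous replication proxy", "hard", danger_threshold=0.9, measured_score=0.05),
]

# Conservative minimum factor, analogous to structural safety factors.
REQUIRED_FACTOR = 4.0

for task in tasks:
    factor = safety_factor(task)
    ok = factor >= REQUIRED_FACTOR
    status = "OK" if ok else "margin too low: add controls / re-evaluate"
    print(f"{task.name} ({task.difficulty}): safety factor {factor:.1f} -> {status}")
```

When the factor on any proxy task drops below the required minimum, the response would mirror the practices described above: more frequent evaluation, additional process controls, or more conservative worst-case assumptions.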
Reasoning
Demonstrating a margin of safety documents the evidence supporting system safety claims and deployment readiness.
Risk Assessment
Model development → 2.4 Engineering & Development
Model development > Data-related → 1.1 Model
Model evaluations → 2.2.2 Testing & Evaluation
Model evaluations > General evaluations → 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking → 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming → 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Verify and Validate: Testing, evaluating, auditing, and red-teaming the AI system
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks
Other