Red teaming, capability evaluations, adversarial testing, and performance verification.
Benchmarks, once created, are inexpensive to apply but may be less informative than red teaming. One reason is that sensitive data (e.g., concerning CBRN-related capabilities) cannot be included in the public questions and answers of benchmarks. Red teaming, on the other hand, can be more accurate when participants bring diverse attack strategies, but it requires more resources to execute than benchmarking. If benchmarking and red-teaming scores are correlated, then frequent benchmarking during model development can surface emerging vulnerabilities and signal when more thorough red teaming is required [21]. Benchmarks can thus act as early warning signs of a larger issue, and red teaming can then be employed to investigate the severity and extent of that issue.
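As a rough illustration of this early-warning pattern, the sketch below (not from the source; the scores, thresholds, and the Pearson-correlation check are illustrative assumptions) first verifies that historical benchmark scores track red-teaming scores for past checkpoints, then flags when a new benchmark result warrants scheduling a more thorough red-teaming exercise.

```python
# Minimal sketch, not from the source: using cheap benchmark scores as an
# early-warning trigger for expensive red teaming. All numbers, thresholds,
# and names here are illustrative assumptions.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical historical data from earlier checkpoints: one benchmark score
# and one red-teaming score per checkpoint (higher = more concerning).
benchmark_history = [0.12, 0.18, 0.25, 0.31, 0.44]
red_team_history = [0.10, 0.15, 0.27, 0.35, 0.48]

# Step 1: check that the cheap benchmark actually tracks the red-teaming signal.
r = correlation(benchmark_history, red_team_history)

CORRELATION_MIN = 0.8        # assumed minimum correlation to trust the proxy
ESCALATION_THRESHOLD = 0.4   # assumed benchmark score that warrants red teaming

def needs_red_teaming(latest_benchmark_score: float) -> bool:
    """Return True when the benchmark early-warning signal suggests scheduling
    a more thorough (and more expensive) red-teaming exercise."""
    if r < CORRELATION_MIN:
        # The benchmark is not a reliable proxy, so do not rely on it alone.
        return True
    # Step 2: during development, run the benchmark frequently and escalate
    # to red teaming once the score crosses the agreed threshold.
    return latest_benchmark_score >= ESCALATION_THRESHOLD

print(f"benchmark/red-team correlation: {r:.2f}")
print(needs_red_teaming(0.46))  # True  -> escalate to red teaming
print(needs_red_teaming(0.20))  # False -> keep benchmarking
```

In practice the escalation threshold would follow from the developer's own risk thresholds and the sensitivity of the benchmark, rather than a fixed constant as assumed here.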
Reasoning
Benchmarking identifies performance gaps that trigger red-teaming evaluation activities.
Benchmarking
Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > General evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation
Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Build and Use Model: Training, fine-tuning, and integrating the AI model
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks