Red teaming, capability evaluations, adversarial testing, and performance verification.
Evaluations assess model capabilities and performance through systematic testing and measurement.
Model evaluations
Frequent testing when scaling model or dataset
Testing models after significant increases in compute, data, or model parameters. Even relatively small changes to model or dataset size can introduce new properties (“emergent abilities”) and failure modes. Identifying them early can prevent the models from being released prematurely [9, 22].
2.2.2 Testing & Evaluation
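A minimal sketch of how such a scale-triggered evaluation gate might look in practice. Everything here (`ScaleSnapshot`, `needs_reeval`, the 2x growth factor) is a hypothetical illustration, not a procedure from the cited frameworks [9, 22].

```python
from dataclasses import dataclass

@dataclass
class ScaleSnapshot:
    params: float  # model parameter count
    tokens: float  # training tokens seen so far
    flops: float   # cumulative training compute

def needs_reeval(last: ScaleSnapshot, now: ScaleSnapshot, factor: float = 2.0) -> bool:
    """Trigger a fresh evaluation once any scale axis grows by `factor`
    since the last evaluated checkpoint. Even modest scale increases can
    surface emergent abilities, so the threshold is kept small."""
    return (now.params >= factor * last.params
            or now.tokens >= factor * last.tokens
            or now.flops >= factor * last.flops)

# Hypothetical usage: re-run the eval suite at qualifying checkpoints.
last_evaluated = ScaleSnapshot(params=1e9, tokens=1e11, flops=1e21)
checkpoint = ScaleSnapshot(params=1e9, tokens=2.1e11, flops=2.3e21)

if needs_reeval(last_evaluated, checkpoint):
    print("Scale increase detected: run capability and safety evals before release.")
```

The growth factor could be tightened for axes where emergent behavior is a particular concern.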
Using an AI model to evaluate AI model outputs

In cases where the outputs of AI models cannot be easily evaluated, AI models can be used to evaluate their own outputs or the outputs of other AI models [82, 16, 17, 91]. The evaluations can then provide a training signal to improve the original model's performance or offer explanations of the output for human users.
2.2.2 Testing & Evaluation
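A sketch of the model-as-judge pattern described above, assuming only a generic text-in, text-out model call. `score_output`, the rubric wording, and `toy_judge` are illustrative stand-ins rather than any specific library's API.

```python
from typing import Callable

# Stand-in type for any model call that takes a prompt and returns text;
# a real system would call an actual model API here.
JudgeFn = Callable[[str], str]

def score_output(judge: JudgeFn, prompt: str, output: str) -> float:
    """Ask a judge model to rate an output from 1-10 and parse the score.
    The score can be logged for human review or used as a training signal."""
    rubric = (
        "Rate the RESPONSE to the PROMPT for correctness and helpfulness "
        "on a scale of 1-10. Reply with only the number.\n"
        f"PROMPT: {prompt}\nRESPONSE: {output}"
    )
    reply = judge(rubric)
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0  # unparseable judgment; flag for human review in practice

# Toy stand-in judge so the sketch runs end to end.
def toy_judge(rubric: str) -> str:
    return "7"

# Pick the best of several candidate outputs by judged score.
best = max(["draft A", "draft B"], key=lambda o: score_output(toy_judge, "Summarize X", o))
print(best)
```

In practice the numeric scores could be aggregated into a reward signal for training, or surfaced to human reviewers alongside the judged outputs.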
GPAI models explaining model outputs in a zero-sum debate game

Debate is a technique that aims to produce reliable explanations of AI model outputs that are too complicated for humans to understand: two GPAI models take opposing sides in a dialogue, and the resulting debate yields an explanation a human can judge [102].

1.1.2 Learning Objectives
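A sketch of the debate loop, again assuming generic model calls; the round structure, `run_debate`, and the toy agents are illustrative assumptions, not the exact protocol of [102].

```python
from typing import Callable

ModelFn = Callable[[str], str]  # stand-in for any text-in, text-out model call

def run_debate(question: str, pro: ModelFn, con: ModelFn, judge: ModelFn,
               rounds: int = 2) -> str:
    """Zero-sum debate: two models argue opposing sides for a fixed number
    of rounds; the full transcript is then shown to a judge (a human, or a
    model standing in for one), whose verdict doubles as an explanation."""
    transcript = [f"Question: {question}"]
    for r in range(rounds):
        transcript.append(f"Pro (round {r + 1}): " + pro("\n".join(transcript)))
        transcript.append(f"Con (round {r + 1}): " + con("\n".join(transcript)))
    verdict = judge("\n".join(transcript) + "\nWho argued correctly, and why?")
    return verdict

# Toy stand-ins so the sketch runs without a real model API.
pro = lambda ctx: "The answer is yes, because ..."
con = lambda ctx: "The answer is no, because ..."
judge = lambda ctx: "Pro wins: its argument survived Con's strongest objection."

print(run_debate("Is the model's output correct?", pro, con, judge))
```

The zero-sum setup is what pushes each side to surface the strongest objections to the other, which is what makes the transcript useful as an explanation.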
Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation
Model evaluations > Auditing: 2.2.3 Auditing & Compliance

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Verify and Validate: Testing, evaluating, auditing, and red-teaming the AI system
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks
Primary: 7 AI System Safety, Failures & Limitations