Red teaming, capability evaluations, adversarial testing, and performance verification.
Reasoning
Red teaming exercises probe systems for vulnerabilities through adversarial testing and evaluation.
Model evaluations
Red teaming for GPAI system evaluation
Red teaming refers to simulated adversarial attacks performed to identify and evaluate a model's vulnerabilities as well as its in-domain and out-of-domain performance.
2.2.2 Testing & Evaluation
Red team access to the final version of a model pre-deployment
Granting red teams access to the final pre-release version of a model can help identify potentially dangerous model properties. Such properties may be missed if red teaming is performed only on earlier versions of the model, since late fine-tuning procedures can introduce new vulnerabilities. Red-teaming AI models before they are released to the public can reduce model non-decommissionability risk: the model's release can be postponed or even prevented if previously unidentified flaws are detected during testing [65].
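For illustration only, a minimal sketch of how such a pre-release check might be automated: a fixed red-team prompt suite is run against both an earlier checkpoint and the final pre-release checkpoint, and cases that newly fail on the final version are flagged for review. The query_model function, the marker-based violation check, and the checkpoint identifiers are placeholders, not part of any cited framework.

```python
# Sketch: compare a red-team prompt suite across two model checkpoints.
# Placeholders: query_model must be wired to the developer's own serving API.

from dataclasses import dataclass


@dataclass
class RedTeamCase:
    prompt: str
    disallowed_markers: list[str]  # strings whose presence indicates an unsafe completion


def query_model(checkpoint: str, prompt: str) -> str:
    # Placeholder: call the inference API for the given checkpoint.
    raise NotImplementedError


def violates(case: RedTeamCase, completion: str) -> bool:
    # Naive substring match; a real harness would use graded or model-based judging.
    return any(marker.lower() in completion.lower() for marker in case.disallowed_markers)


def regressions(cases: list[RedTeamCase], early_ckpt: str, final_ckpt: str) -> list[RedTeamCase]:
    """Return cases that pass on the early checkpoint but fail on the final one,
    i.e. vulnerabilities plausibly introduced by late fine-tuning."""
    flagged = []
    for case in cases:
        early_bad = violates(case, query_model(early_ckpt, case.prompt))
        final_bad = violates(case, query_model(final_ckpt, case.prompt))
        if final_bad and not early_bad:
            flagged.append(case)
    return flagged
```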
2.2.2 Testing & Evaluation
Scope red-teaming activities based on deployment context
Red-teaming activities can be scoped and tailored to the specific deployment circumstances of an AI system. This involves adapting the scope, depth, and focus of red-teaming efforts to match the intended use case, potential risks, and operational context of the system. Points of consideration include:
• The diversity of potential users and use cases
• The sensitivity and impact of the application domain
• The scale of deployment and potential reach
• Known vulnerabilities or concerns specific to the model or similar systems
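As a rough illustration of how these considerations could be encoded and mapped to a red-teaming scope, the following sketch defines a small deployment-context record and a derivation function. The field names, category values, and thresholds are assumptions made for the example, not part of any cited framework.

```python
# Sketch: derive a coarse red-teaming plan from deployment context.
# All field names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class DeploymentContext:
    user_diversity: str            # e.g. "internal", "enterprise", "general_public"
    domain_sensitivity: str        # e.g. "low", "medium", "high" (health, finance, ...)
    expected_scale: int            # rough number of end users reached
    known_concerns: list[str] = field(default_factory=list)


def red_team_scope(ctx: DeploymentContext) -> dict:
    """Map deployment context to the depth and focus areas of red-teaming."""
    depth = "baseline"
    if ctx.domain_sensitivity == "high" or ctx.expected_scale > 1_000_000:
        depth = "extended"

    focus = ["jailbreaks", "harmful content"]
    focus += ctx.known_concerns  # always probe known weak spots of this or similar systems
    if ctx.user_diversity == "general_public":
        focus.append("multilingual and low-literacy user prompts")

    return {"depth": depth, "focus_areas": sorted(set(focus))}
```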
2.2.2 Testing & Evaluation
Red teaming to test the resilience of open-weights models to fine-tuning
Before the release of open-weights models, red teamers can test the resilience of safety training against fine-tuning. Safety training may be partially or fully overridden by fine-tuning, whether intentionally (e.g., by malicious actors) or unintentionally (e.g., by fine-tuning an AI model for a specific use case) [168].
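A minimal sketch of how such a resilience check might be structured, assuming the developer already has fine-tuning and safety-evaluation pipelines: fine-tune the release candidate weights on a small adversarial dataset and compare refusal rates on a held-out harmful-prompt set before and after. The fine_tune and refusal_rate functions and the degradation threshold are placeholders for the developer's actual tooling and policy.

```python
# Sketch: measure how much a small fine-tuning run degrades safety behavior.
# Placeholders: fine_tune and refusal_rate must be backed by real pipelines;
# the 20-point degradation threshold is an arbitrary example value.


def fine_tune(weights_path: str, dataset_path: str, steps: int) -> str:
    # Placeholder: run supervised fine-tuning and return the path to the new weights.
    raise NotImplementedError


def refusal_rate(weights_path: str, harmful_prompts_path: str) -> float:
    # Placeholder: fraction of harmful prompts the model refuses (0.0 to 1.0).
    raise NotImplementedError


def fine_tuning_resilience(base_weights: str, attack_data: str, harmful_prompts: str) -> dict:
    """Compare safety behavior before and after a small adversarial fine-tune."""
    before = refusal_rate(base_weights, harmful_prompts)
    tuned_weights = fine_tune(base_weights, attack_data, steps=500)
    after = refusal_rate(tuned_weights, harmful_prompts)
    degradation = before - after
    return {
        "refusal_before": before,
        "refusal_after": after,
        "degradation": degradation,
        "flag_for_review": degradation > 0.20,  # example threshold only
    }
```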
2.2.2 Testing & Evaluation
Model development
2.4 Engineering & Development
Model development > Data-related
1.1 Model
Model evaluations
2.2.2 Testing & Evaluation
Model evaluations > General evaluations
2.2.2 Testing & Evaluation
Model evaluations > Benchmarking
3.2.1 Benchmarks & Evaluation
Model evaluations > Auditing
2.2.3 Auditing & Compliance
Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks
Primary
7 AI System Safety, Failures & Limitations
Other