Shared evaluation datasets, testing frameworks, and measurement tools for AI systems.
Reasoning
Shared benchmarking practices produce adoptable evaluation artifacts, enabling standardized assessment across organizations.
Benchmarking
Informative and powerful benchmarks
Developers of GPAI systems can select benchmarks that are difficult enough to be informative about the capabilities of their AI systems and that cover a broad spectrum of domains, in order to signal areas where the GPAI system performs poorly [120]. Suitable benchmarks contain no label errors, are resistant to benchmark contamination (their items have not leaked into training data), and, if they contain domain-specific questions, are audited by independent domain experts. For multimodal GPAI systems, good benchmarks cover every modality and, especially, the interactions between modalities.
Taxonomy category: 3.2.1 Benchmarks & Evaluation
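To make two of these criteria concrete, the sketch below is an illustration, not taken from the source: it screens a candidate benchmark for saturation and flags items whose word n-grams overlap with a training corpus as a rough contamination signal. The function names, thresholds, and toy data are hypothetical.

```python
# Illustrative screen for two benchmark-suitability criteria:
# (1) the benchmark is not saturated for the model under test, and
# (2) items do not appear verbatim (by n-gram overlap) in the training corpus.
# All names and thresholds here are assumptions made for this sketch.
import re

def ngram_set(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text (lowercased, punctuation dropped)."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(item: str, training_corpus: list[str], n: int = 8) -> bool:
    """Flag an item whose n-grams overlap with any training document."""
    item_grams = ngram_set(item, n)
    return any(item_grams & ngram_set(doc, n) for doc in training_corpus)

def screen_benchmark(benchmark_items: list[str],
                     model_accuracy: float,
                     training_corpus: list[str],
                     saturation_threshold: float = 0.95) -> dict:
    """Summarise whether a benchmark is still informative for a given model."""
    contaminated = [i for i, item in enumerate(benchmark_items)
                    if is_contaminated(item, training_corpus)]
    return {
        "saturated": model_accuracy >= saturation_threshold,
        "contaminated_indices": contaminated,
        "contamination_rate": len(contaminated) / max(len(benchmark_items), 1),
    }

if __name__ == "__main__":
    report = screen_benchmark(
        benchmark_items=["What is the boiling point of water at sea level?"],
        model_accuracy=0.97,
        training_corpus=["the boiling point of water at sea level is 100 degrees"],
    )
    print(report)
    # {'saturated': True, 'contaminated_indices': [0], 'contamination_rate': 1.0}
```

A production contamination check would use the actual pre-training corpus and more robust matching (normalized or fuzzy n-gram comparison), but the overall structure of the screen stays the same.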
Benchmark dataset auditing
Auditing benchmark datasets allows the utility and limitations of those datasets to be verified [158], which in turn lets the provider measure AI model capabilities and safety more accurately. Auditing includes evaluation of the datasets by independent third-party organizations and the release of benchmark dataset metadata to those auditors.
Taxonomy category: 2.2.3 Auditing & Compliance
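As one sketch of what releasing benchmark metadata to auditors could look like, the example below (an assumption, not drawn from the source or from any specific auditing standard) defines a minimal metadata record together with two integrity checks an independent third party could run on the raw items; every field and function name is illustrative.

```python
# Hypothetical metadata record a provider might release to independent auditors,
# plus two basic integrity checks (duplicate prompts, missing labels) an auditor
# could apply to the raw benchmark items. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class BenchmarkMetadata:
    name: str
    version: str
    collection_method: str          # e.g. "expert-written", "crowdsourced"
    intended_capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    license: str = "unspecified"

def audit_items(items: list[dict]) -> dict:
    """Simple checks an auditor could run before deeper expert review."""
    prompts = [it.get("prompt", "") for it in items]
    duplicate_prompts = len(prompts) - len(set(prompts))
    items_missing_labels = sum(1 for it in items if not it.get("label"))
    return {"n_items": len(items),
            "duplicate_prompts": duplicate_prompts,
            "items_missing_labels": items_missing_labels}

if __name__ == "__main__":
    meta = BenchmarkMetadata(name="toy-qa", version="0.1",
                             collection_method="expert-written",
                             intended_capabilities=["factual recall"])
    items = [{"prompt": "2 + 2 = ?", "label": "4"},
             {"prompt": "2 + 2 = ?", "label": "4"},
             {"prompt": "Capital of France?", "label": ""}]
    print(meta.name, audit_items(items))
    # toy-qa {'n_items': 3, 'duplicate_prompts': 1, 'items_missing_labels': 1}
```

In practice the released metadata would follow an established documentation format (for example, a datasheet for the dataset), and label-error estimation would involve expert re-annotation rather than automated checks alone.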
Dynamic benchmarking
Dynamic benchmarks are benchmarks that can be continuously updated with new human-generated data. By keeping one or more target models “in the loop,” benchmark examples can be generated with the intent of fooling these target models, or of assessing whether the models express an appropriate level of uncertainty [103]. As the benchmark dataset grows, previously benchmarked models can be reassessed against the updated dataset, giving a more representative picture of their performance.
Taxonomy category: 3.2.1 Benchmarks & Evaluation
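A minimal sketch of this model-in-the-loop protocol, under the simplifying assumption of exact-match scoring: human-written candidate items are kept only if they fool the current target model, and previously benchmarked models are re-scored whenever the dataset grows. The class, function, and variable names are hypothetical.

```python
# Dynamic benchmark sketch: items that fool the in-the-loop target model are
# added to the dataset, and any earlier model can be re-evaluated on the
# updated dataset. Exact-match scoring is a simplifying assumption.
from typing import Callable

Model = Callable[[str], str]   # prompt -> answer

class DynamicBenchmark:
    def __init__(self) -> None:
        self.items: list[tuple[str, str]] = []   # (prompt, reference answer)

    def propose(self, prompt: str, reference: str, target: Model) -> bool:
        """Keep a human-written item only if the target model answers it incorrectly."""
        if target(prompt).strip() != reference.strip():
            self.items.append((prompt, reference))
            return True
        return False

    def evaluate(self, model: Model) -> float:
        """Accuracy of a model on the current version of the dataset."""
        if not self.items:
            return 0.0
        correct = sum(model(p).strip() == r.strip() for p, r in self.items)
        return correct / len(self.items)

if __name__ == "__main__":
    target: Model = lambda prompt: "I don't know"       # stand-in target model
    older_model: Model = lambda prompt: "I don't know"  # previously benchmarked model
    bench = DynamicBenchmark()
    bench.propose("What is 17 * 23?", "391", target)    # fools the target, so it is kept
    print("older model re-scored:", bench.evaluate(older_model))  # 0.0
```

A fuller version would also record model confidence, so that calibration (whether models express appropriate uncertainty) can be assessed as the dataset evolves.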
Model development → 2.4 Engineering & Development
Model development > Data-related → 1.1 Model
Model evaluations → 2.2.2 Testing & Evaluation
Model evaluations > General evaluations → 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking → 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming → 2.2.2 Testing & Evaluation
Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Other (outside lifecycle): Outside the standard AI system lifecycle
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks
Other