Shared evaluation datasets, testing frameworks, and measurement tools for AI systems.
Dynamic benchmarks are benchmarks that can be continuously updated with new human-generated data. With one or more target models “in the loop,” annotators write examples with the intent of fooling these target models, or of assessing whether the models express an appropriate level of uncertainty [103]. As the benchmark dataset grows, previously benchmarked models can be reassessed against the updated data, reflecting their performance in a more representative manner.
Examples of such dynamic benchmarks include DynaSent [153] for sentiment analysis, LFTW [208] for hate speech detection, and Human-Adversarial VQA [182] for visual question answering.
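The collection loop behind such benchmarks is simple to sketch. Below is a minimal illustration, assuming a hypothetical target model that exposes a predict(text) -> label method (the class name, method names, and data layout are illustrative, not taken from DynaSent, LFTW, or Human-Adversarial VQA): candidate examples are kept only when they fool the current target model, and any previously benchmarked model can then be re-scored on the enlarged dataset.

```python
from dataclasses import dataclass, field


@dataclass
class DynamicBenchmark:
    """A benchmark that grows with human-written adversarial examples."""

    examples: list = field(default_factory=list)  # (text, gold_label) pairs

    def submit(self, text, gold_label, target_model):
        """Add a candidate example only if it fools the target model."""
        if target_model.predict(text) != gold_label:
            self.examples.append((text, gold_label))
            return True   # model was fooled; example joins the benchmark
        return False      # model answered correctly; example is discarded

    def score(self, model):
        """Reassess any model, old or new, on the current dataset."""
        if not self.examples:
            return None
        correct = sum(model.predict(t) == y for t, y in self.examples)
        return correct / len(self.examples)
```

In practice, each collection round typically swaps in a stronger target model, often one trained on the examples gathered so far, which is what keeps the benchmark moving as models improve.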
Reasoning
A shared evaluation dataset enabling standardized assessment of AI system capabilities across organizations.
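What makes such assessment standardized is that every organization runs the same frozen examples through the same scoring rule. A minimal sketch, assuming the shared dataset is distributed as JSONL with one {"input", "target"} record per line and models expose a predict(text) method (both the schema and the interface are illustrative assumptions, not a published standard):

```python
import json


def load_shared_dataset(path):
    """Load a shared benchmark stored as JSONL: one example per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


def evaluate(model, dataset):
    """Fixed scoring rule (exact match), so scores reported by
    different organizations are directly comparable."""
    hits = sum(model.predict(ex["input"]) == ex["target"] for ex in dataset)
    return hits / len(dataset)
```

Because both the dataset and the metric are fixed, a reported score depends only on the model being assessed, not on who ran the evaluation.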
Good benchmarking practices
Category mappings:
Model development ↔ 2.4 Engineering & Development
Model development > Data-related ↔ 1.1 Model
Model evaluations ↔ 2.2.2 Testing & Evaluation
Model evaluations > General evaluations ↔ 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking ↔ 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming ↔ 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Other (outside lifecycle): Outside the standard AI system lifecycle
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks