Changes to the model's learned parameters, architecture, or training process, including modifications to training data that affect what the model learns.
Providers of AI models can apply techniques to reduce the biases of their models. Current debiasing methods focus on three main types of bias:

• Racial and religious bias - Stereotypes based on race or religious belief.
• Gender bias - Stereotypes tied to gender roles and expectations.
• Political and cultural bias - Propagation of dominant ideologies or extremist attitudes.

Debiasing methods can be categorized by when they are applied during AI development:

• Data pre-processing - Removing or correcting unwanted and biased data, and augmenting quality data to offset data bias, for example by rebalancing datasets with counterfactual data augmentation.
• During training - Intervening on the training dynamics of the AI model, for example by adding debiasing terms to the objective function or by negatively reinforcing biased outputs.
• Post-training - Correcting a trained but biased model, for example by modifying its embedding space.
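The pre-processing category above can be illustrated with a minimal sketch of counterfactual data augmentation. This is not taken from any specific framework: the `SWAPS` table and the `augment` helper are illustrative assumptions, showing only the core idea of pairing each training sentence with a copy whose gendered terms are swapped so the dataset is rebalanced.

```python
import re

# Illustrative swap table for counterfactual data augmentation
# (a real pipeline would use a much larger, curated term list).
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

_PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_gendered_terms(sentence: str) -> str:
    """Return the sentence with each listed gendered term swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        # Preserve capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped
    return _PATTERN.sub(repl, sentence)

def augment(corpus: list[str]) -> list[str]:
    """Keep each original sentence and add its counterfactual twin."""
    out = []
    for sentence in corpus:
        out.append(sentence)
        counterfactual = swap_gendered_terms(sentence)
        if counterfactual != sentence:
            out.append(counterfactual)
    return out
```

For example, `augment(["The man saw his dog."])` yields both the original sentence and "The woman saw her dog.", so the model sees both variants equally often.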
Reasoning
Debiasing methods modify the composition and filtering of training data to remove or reduce the bias patterns the model would otherwise learn.
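The filtering side of this can be sketched as a simple pre-training pass that drops flagged examples. The `BLOCKLIST` and `is_biased` predicate here are stand-ins; real pipelines typically use trained bias classifiers or curated term lists rather than a toy keyword check.

```python
# Hypothetical pre-processing filter: drop training examples flagged
# as biased before the model ever sees them.
BLOCKLIST = {"stereotype_marker_a", "stereotype_marker_b"}  # placeholder terms

def is_biased(example: str) -> bool:
    """Toy predicate: flag an example if any blocklisted token appears."""
    tokens = set(example.lower().split())
    return bool(tokens & BLOCKLIST)

def filter_corpus(corpus: list[str]) -> list[str]:
    """Return only the examples that pass the bias check."""
    return [example for example in corpus if not is_biased(example)]
```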
Bias
Model development: 2.4 Engineering & Development
Model development > Data-related: 1.1 Model
Model evaluations: 2.2.2 Testing & Evaluation
Model evaluations > General evaluations: 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking: 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming: 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Other (multiple stages)
Applies across multiple lifecycle stages
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks