Changes to the model's learned parameters, architecture, or training process, including modifications to training data that affect what the model learns.
Training methods shape model behavior through learning objectives and optimization targets.
Data-related
Documentation of data collection, annotation, and maintenance practices
Dataset collection, annotation, and maintenance processes can be documented in detail, including potential unintentional misuse scenarios and corresponding recommendations for data usage [80, 175, 99]. This documentation contributes to transparency, ensures that inherent dataset limitations are known in advance, and helps practitioners select appropriate datasets for their intended use cases.
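The documentation practice above can be sketched as a small datasheet-style record. This is a minimal illustration, not a format prescribed by the cited works; the `DatasetCard` class and all of its field names and example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetCard:
    """Minimal datasheet-style record for a training dataset (illustrative)."""
    name: str
    collection_method: str                                 # how the raw data was gathered
    annotation_process: str                                # who labeled it and how
    maintenance_policy: str                                # update/retirement schedule
    known_limitations: list = field(default_factory=list)  # inherent dataset limitations
    misuse_scenarios: list = field(default_factory=list)   # foreseeable unintended uses
    recommended_uses: list = field(default_factory=list)   # intended use cases

# Hypothetical example entry.
card = DatasetCard(
    name="example-corpus-v1",
    collection_method="Web crawl, filtered for English text",
    annotation_process="Crowdsourced labels with 3-way annotator agreement",
    maintenance_policy="Quarterly review; stale sources removed",
    known_limitations=["Underrepresents non-English dialects"],
    misuse_scenarios=["Training demographic-inference models"],
    recommended_uses=["Language-model pretraining research"],
)
```

Keeping limitations and misuse scenarios as explicit fields makes them hard to omit when the dataset is handed to a new team.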
3.2.2 Technical Standards

Use of synthetic data
Synthetic data is data generated artificially rather than collected from the real world; it is used to train AI models as an alternative to, or augmentation of, natural data. Effective generation and use of synthetic data gives the trainer greater oversight of the training dataset, since its statistical properties can be controlled directly. Synthetic data can mitigate dataset bias by adding samples from underrepresented distributions or minority groups, and it can improve privacy by replacing or masking sensitive records with synthetic samples [141].
1.1.1 Training Data -> Model development
2.4 Engineering & Development -> Model development > Data-related
1.1 Model -> Model evaluations
2.2.2 Testing & Evaluation -> Model evaluations > General evaluations
2.2.2 Testing & Evaluation -> Model evaluations > Benchmarking
3.2.1 Benchmarks & Evaluation -> Model evaluations > Red teaming
2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Unable to classify
Could not be classified to a specific AIRM function