Implementation standards, guidelines, and documented best practices for AI development.
Also in Shared Infrastructure
Dataset collection, annotation, and maintenance processes can be documented in detail, including potential unintended misuse scenarios and corresponding recommendations for data usage [80, 175, 99]. This contributes to transparency, makes inherent dataset limitations known in advance, and helps in selecting the right datasets for intended use cases.
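A minimal sketch of what such documentation could look like as a machine-readable record, in the spirit of datasheets for datasets. All field names and example values here are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

# Hypothetical dataset documentation record; fields cover the collection,
# annotation, and maintenance processes plus known misuse scenarios.
@dataclass
class DatasetDocumentation:
    name: str
    collection_process: str                 # how the raw data was gathered
    annotation_process: str                 # who labelled it, under what guidelines
    maintenance_policy: str                 # update cadence, versioning, contact
    known_limitations: list = field(default_factory=list)
    misuse_scenarios: list = field(default_factory=list)  # foreseeable unintended uses
    usage_recommendations: list = field(default_factory=list)

    def is_suitable_for(self, use_case: str) -> bool:
        """Flag a use case that matches a documented misuse scenario."""
        return all(use_case.lower() not in m.lower() for m in self.misuse_scenarios)

doc = DatasetDocumentation(
    name="web-crawl-v1",
    collection_process="Public web crawl snapshot, language-filtered",
    annotation_process="Crowd-sourced toxicity labels, double-annotated",
    maintenance_policy="Quarterly refresh; issues reported via tracker",
    known_limitations=["English-heavy", "2020-2023 coverage only"],
    misuse_scenarios=["individual profiling", "medical diagnosis"],
    usage_recommendations=["pretraining only", "audit before deployment"],
)
```

A record like this lets downstream users check an intended use case against the documented misuse scenarios before selecting the dataset.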
Reasoning
Documents data collection and annotation practices as part of secure development workflows and engineering processes.
Training-related
Data cleaning
Providers can filter the training dataset using multiple layered techniques, ranging from rule-based filters to anomaly detection based on the influence or statistical properties of individual data points [213]. For example, a data cleaning procedure can start with filename checks to detect duplicates or wrongly formatted data, and then flag the most influential data samples in the dataset via influence functions for anomaly detection.
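The layered approach above could be sketched roughly as follows. The filename pattern, hash choice, and z-score threshold are assumptions, and the z-score stage is a cheap statistical stand-in for influence-function-based anomaly detection.

```python
import hashlib
import re
import statistics

# Illustrative layered data-cleaning pass over (filename, text) pairs.
VALID_NAME = re.compile(r"^[\w-]+\.(txt|json)$")

def clean(dataset):
    """Return (kept, flagged_for_review) from a list of (filename, text) pairs."""
    seen, kept = set(), []
    for fname, text in dataset:
        if not VALID_NAME.match(fname):            # layer 1: rule-based filter
            continue
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest in seen:                         # layer 2: exact-duplicate filter
            continue
        seen.add(digest)
        kept.append((fname, text))

    # layer 3: flag length outliers (|z| > 2) for closer inspection,
    # where a real pipeline might rank samples by influence instead
    lengths = [len(t) for _, t in kept]
    mu = statistics.mean(lengths)
    sigma = statistics.pstdev(lengths) or 1.0
    flagged = [f for f, t in kept if abs(len(t) - mu) / sigma > 2]
    return kept, flagged
```

Each layer is independent, so cheap rule-based checks run first and the more expensive anomaly scoring only sees data that survived them.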
1.1.1 Training Data

Internal data poisoning diagnosis
Providers can have an internal framework to identify what specific data poisoning attack their model may be a victim of based on a set of symptoms, such as analysis of target algorithm and architecture, perturbation scope and dimension, victim model, and data type [39]. This framework includes known defenses against the diagnosed attack, which providers can then apply to the model.
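Such a framework amounts to a lookup from observed symptoms to a suspected attack class and its known defenses. The sketch below is a hypothetical stand-in: the symptom keys, attack names, and defense lists are illustrative, not a real catalogue from [39].

```python
# Hypothetical symptom-to-diagnosis table keyed on (perturbation scope, data type).
DIAGNOSIS_TABLE = {
    ("trigger-pattern", "image"): (
        "backdoor attack", ["activation clustering", "fine-pruning"]),
    ("label-only", "tabular"): (
        "label flipping", ["robust loss functions", "kNN-based relabelling"]),
    ("clean-label", "image"): (
        "feature-collision poisoning", ["influence-based filtering", "strong augmentation"]),
}

def diagnose(symptoms: dict):
    """Map observed symptoms to a suspected attack and candidate defenses."""
    key = (symptoms.get("perturbation_scope"), symptoms.get("data_type"))
    attack, defenses = DIAGNOSIS_TABLE.get(key, ("unknown", ["manual audit"]))
    return {"suspected_attack": attack, "recommended_defenses": defenses}
```

A provider would then apply the recommended defenses to the affected model, falling back to manual auditing when no diagnosis matches.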
2.2.1 Risk Assessment

Tamper-resistant safeguards for open-weight models
Training and implementing safeguards can improve the robustness of open-weight models against modifications from fine-tuning or other methods that change the learned weights, especially modifications aimed at removing safety restrictions. These safeguards can remain effective even after extensive fine-tuning, ensuring that the model retains its protective measures [199].
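A toy first-order sketch of the idea, not the method of [199]: a logistic-regression "refusal" classifier stands in for the model, a few gradient steps stand in for a fine-tuning attack, and the defender trains against the displacement that simulated attacks achieve. All constants (learning rates, step counts, weighting) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def grad(w, X, y):
    """Gradient of the mean logistic loss for labels y in {0, 1}."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# y = 1 means "refuse" (harmful prompt), y = 0 means "comply" (benign prompt).
X = np.vstack([rng.normal(+2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])
harmful = X[:50]

def simulate_attack(w, steps=10, lr=0.5):
    """Fine-tuning attack: push the model to comply on harmful prompts."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * grad(w, harmful, np.zeros(50))
    return w

w = np.zeros(2)
for _ in range(300):
    w_atk = simulate_attack(w)
    # Descend the safety loss while pushing back against the total parameter
    # displacement the simulated attack achieved (a crude robustness term).
    w += -0.5 * grad(w, X, y) + 0.1 * (w - w_atk)

refusal_before = (sigmoid(harmful @ w) > 0.5).mean()
refusal_after = (sigmoid(harmful @ simulate_attack(w)) > 0.5).mean()
```

In this toy setup the hardened classifier keeps refusing harmful inputs even after the simulated fine-tuning attack, whereas a model trained on the safety loss alone is easily flipped.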
1.1.3 Capability Modification

Model development (2.4 Engineering & Development)
Model development > Data-related (1.1 Model)
Model evaluations (2.2.2 Testing & Evaluation)
Model evaluations > General evaluations (2.2.2 Testing & Evaluation)
Model evaluations > Benchmarking (3.2.1 Benchmarks & Evaluation)
Model evaluations > Red teaming (2.2.2 Testing & Evaluation)

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Collect and Process Data
Gathering, curating, labelling, and preprocessing training data
Developer
Entity that creates, trains, or modifies the AI system
Map
Identifying and documenting AI risks, contexts, and impacts