Unclassifiable mitigations.
Reasoning
The mitigation name "Privacy" lacks a definition and supporting evidence; there is insufficient information to identify the focal activity or where it occurs.
Impacts of AI
Knowledge unlearning techniques
Knowledge unlearning techniques allow specific information to be “forgotten” without retraining the entire model, preserving its general capabilities. These techniques can be used to reduce privacy risks and to guard against reproducing copyrighted or harmful content [96, 188].
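One simple family of unlearning methods fine-tunes the model to *increase* its loss on the data to be forgotten (gradient ascent on the forget set). The sketch below is a toy illustration on a NumPy logistic-regression model, not a specific method from the cited works; all data and parameters are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the average logistic loss.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def logloss(w, X, y):
    p = np.clip(sigmoid(X @ w), 1e-12, 1 - 1e-12)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(float)

# Train on the full dataset.
w = np.zeros(3)
for _ in range(200):
    w -= 0.5 * grad(w, X, y)

# Pretend the first 10 records must be "forgotten".
Xf, yf = X[:10], y[:10]
loss_before = logloss(w, Xf, yf)

# Unlearning step: a few gradient *ascent* steps on the forget set only,
# pushing the model away from its fit to those records.
wu = w.copy()
for _ in range(5):
    wu += 0.5 * grad(wu, Xf, yf)

loss_after = logloss(wu, Xf, yf)
print(loss_before, loss_after)  # loss on the forgotten records rises
```

The trade-off real methods must manage, and which this toy version ignores, is increasing loss on the forget set without degrading the model's general capabilities.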
1.1.3 Capability Modification
Differential privacy
Differential privacy techniques [8] can be used to protect users’ privacy by ensuring that sensitive information is not leaked from a training dataset, even after thorough statistical analysis. With differential privacy, noise is added to the dataset or the model’s output in such a way that one cannot deduce the presence or absence of a particular data point within the dataset. This provides individuals with plausible deniability and prevents their information from being exposed.
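The classic illustration of this idea is the Laplace mechanism: to release a count with sensitivity 1 (adding or removing one person changes the count by at most 1) under epsilon-differential privacy, add Laplace noise with scale 1/epsilon. A minimal sketch (the query and numbers are invented for illustration):

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    # A counting query has sensitivity 1: one individual's presence or
    # absence changes the count by at most 1, so noise scale is 1/epsilon.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(42)
true_count = 1234  # e.g. "how many records satisfy some predicate"
release = laplace_count(true_count, epsilon=0.5, rng=rng)
print(release)  # a single noisy release hides any one individual's presence

# The noise is mean-zero, so releases are accurate on average, but note
# that repeated releases consume privacy budget (epsilons compose).
noisy = [laplace_count(true_count, epsilon=0.5, rng=rng) for _ in range(10000)]
print(float(np.mean(noisy)))
```

Smaller epsilon means stronger privacy but noisier answers; choosing epsilon is the central deployment decision.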
1.2.4 Security Infrastructure
Quantifying privacy risks of AI models
Measuring the privacy risks of an AI model allows providers and users to calibrate their expectations about where the model can be applied and to take the necessary steps to reduce those risks. Example metrics include:
• Success rate of membership inference attacks [186] - the rate at which an attack correctly predicts that a given record is part of the dataset used to train a given AI model.
• Discoverable memorization [38] - a theoretical upper bound on the amount of training data a given model memorizes. Assuming full knowledge of the training data, it measures the percentage of data points for which, given an incomplete data point, the model outputs the remaining (memorized) part.
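The first metric above can be computed directly from an attack's predictions. A common baseline is a loss-threshold attack: training-set members tend to have lower loss than non-members. The sketch below uses synthetic per-example losses (all numbers invented for illustration) rather than a real model:

```python
import numpy as np

# Hypothetical per-example losses: members of the training set tend to
# have lower loss than non-members, which a threshold attack exploits.
rng = np.random.default_rng(0)
member_losses = rng.normal(loc=0.5, scale=0.3, size=1000)
nonmember_losses = rng.normal(loc=1.5, scale=0.3, size=1000)

threshold = 1.0  # attack guesses "member" whenever loss < threshold
tp = np.sum(member_losses < threshold)       # members correctly identified
tn = np.sum(nonmember_losses >= threshold)   # non-members correctly rejected
success_rate = (tp + tn) / 2000
print(success_rate)  # well above the 0.5 random-guess baseline
```

A success rate near 0.5 means the attack does no better than guessing; the further it rises above 0.5, the more the model leaks about who was in its training data.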
2.2.2 Testing & Evaluation
Model development
2.4 Engineering & Development
Model development > Data-related
1.1 Model
Model evaluations
2.2.2 Testing & Evaluation
Model evaluations > General evaluations
2.2.2 Testing & Evaluation
Model evaluations > Benchmarking
3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming
2.2.2 Testing & Evaluation
Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Unable to classify: could not be classified to a specific lifecycle stage.
Unable to classify: could not be classified to a specific actor type.
Unable to classify: could not be classified to a specific AIRM function.
Other