Evaluating explainability method sensitivity to data inputs and model parameters
Model parameter and data randomization tests can be employed as sanity checks [2] to probe the relationship between a model, its inputs, and the explainability method. If an explanation is independent of, or insensitive to, the underlying model or the input data, this indicates that the explainability method is unreliable.
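As a toy illustration of such a sanity check (not the exact procedure of [2]), the sketch below computes input-gradient attributions for a small linear model, randomizes the weights, and compares the two attribution maps. A method whose explanations barely change under weight randomization would fail the test; all names and values here are illustrative.

```python
import random

def attribution(weights, x):
    # For a linear model y = w . x, the input-gradient attribution
    # of feature i is w[i] * x[i].
    return [w * xi for w, xi in zip(weights, x)]

def cosine(a, b):
    dot = sum(p * q for p, q in zip(a, b))
    norm_a = sum(p * p for p in a) ** 0.5
    norm_b = sum(q * q for q in b) ** 0.5
    return dot / (norm_a * norm_b)

random.seed(0)
x = [0.5, -1.2, 3.0, 0.7]
trained = [2.0, -0.5, 1.5, 0.1]                      # stand-in for learned weights
randomized = [random.gauss(0, 1) for _ in trained]   # the parameter randomization test

before = attribution(trained, x)
after = attribution(randomized, x)
similarity = cosine(before, after)

# A similarity near 1.0 would flag the method as insensitive to the model;
# a faithful attribution method should track the weights it is explaining.
print(f"attribution similarity after weight randomization: {similarity:.2f}")
```

The data randomization variant of the test is analogous: retrain (or relabel) on permuted labels and check that the explanations change accordingly.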
2.2.2 Testing & Evaluation

Enforcing model output interpretability post-training
While a trained model may be opaque with respect to its predictions, the final output of a system built around the model can nonetheless be completely interpretable. For example, a neuro-symbolic system for robot navigation can use a language model to generate candidate navigation plans and then have a deterministic solver simulate their execution to discard invalid ones [128]. The optimal valid plan is executed in the real world and is interpretable.
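The generate-then-verify pattern described above can be sketched as follows. Here `propose_plans` stands in for the language model and a toy grid simulator plays the role of the deterministic solver; both are illustrative, not the system of [128].

```python
# Moves on a small grid; (0, 0) is the top-left corner.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def simulate(plan, start, goal, obstacles, size=4):
    """Deterministically execute a plan; return True iff it reaches the goal
    without leaving the grid or hitting an obstacle."""
    x, y = start
    for step in plan:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        if not (0 <= x < size and 0 <= y < size) or (x, y) in obstacles:
            return False
    return (x, y) == goal

def propose_plans():
    # Stand-in for LM-generated candidate plans (one invalid, two valid).
    return [
        ["right", "right", "right"],                              # hits the obstacle
        ["down", "right", "right", "up", "right"],                # valid detour
        ["down", "down", "right", "right", "right", "up", "up"],  # longer valid route
    ]

start, goal, obstacles = (0, 0), (3, 0), {(1, 0)}
valid = [p for p in propose_plans() if simulate(p, start, goal, obstacles)]
best = min(valid, key=len)  # "optimal" here means shortest valid plan
print(best)                 # -> ['down', 'right', 'right', 'up', 'right']
```

Because the solver, not the model, certifies validity, the executed plan can be audited step by step regardless of how opaque the plan generator is.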
1.2.2 Runtime Environment

Paraphrasing to reduce hidden information
Paraphrasing can mitigate steganography and encoded reasoning by reducing the amount of hidden information that can be carried in text [166].
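A toy illustration of why paraphrasing helps, assuming a covert channel that hides bits in synonym choice; the "paraphraser" here is a trivial canonicalisation standing in for a separate trusted rewriting model, and all names are invented for the sketch.

```python
# A covert channel: the sender encodes one bit per phrase via synonym choice.
CHANNEL = {"0": "large", "1": "big"}
DECODE = {word: bit for bit, word in CHANNEL.items()}

def encode(bits):
    return " ".join(f"a {CHANNEL[b]} step" for b in bits)

def decode(text):
    return "".join(DECODE[w] for w in text.split() if w in DECODE)

def paraphrase(text):
    # Stand-in paraphraser: rewrites to one canonical synonym, so the
    # specific word choice (the covert channel) does not survive.
    return text.replace("big", "large")

stego = encode("1011")
assert decode(stego) == "1011"        # channel works before paraphrasing
leaked = decode(paraphrase(stego))
print(leaked)                         # -> "0000": the hidden payload is destroyed
```

The surface meaning of the text is preserved, but the degrees of freedom the encoder relied on (word choice, phrasing, formatting) are collapsed, shrinking the channel's capacity.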
1.2.1 Guardrails & Filtering

Testing for erroneous or irrelevant features through concept learning
Interpretability techniques, particularly concept learning [77], can be used to test whether a model has learned erroneous features or relies on irrelevant features in its predictions. This can help identify and mitigate risks associated with incorrect or non-informative features influencing the model's outputs.
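One simple way to probe reliance on an irrelevant feature, offered here as a stand-in for the concept-based tests of [77] rather than their actual method, is to intervene on that feature alone and count how often the prediction flips. The toy linear model and feature layout below are invented for illustration.

```python
def predict(weights, x):
    # Toy linear classifier with a 0 decision threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

# Feature layout: [signal, spurious]. This toy model leans heavily
# on the spurious feature (e.g. a watermark), which should be irrelevant.
weights = [0.1, 2.0]
dataset = [[0.5, 1.0], [-0.4, 1.0], [0.3, -1.0], [-0.6, -1.0]]

flips = 0
for x in dataset:
    base = predict(weights, x)
    perturbed = x[:]
    perturbed[1] = -perturbed[1]      # intervene on the spurious concept only
    if predict(weights, perturbed) != base:
        flips += 1

sensitivity = flips / len(dataset)
# High sensitivity to a concept that should not matter flags a risky model.
print(f"prediction flips under concept intervention: {sensitivity:.0%}")
```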
2.2.2 Testing & Evaluation

Dashboard of model properties
A dashboard [41] displays the relevant information about the model's internal state and properties to the user. It is used to ensure that the user is informed about factors that influence the model's behavior and that the user maintains control over the model. Restricting dashboard access to the user alone helps preserve an information asymmetry in the user's favor, supporting user oversight of the model.
1.2.3 Monitoring & Detection

Modification of model internal representation
Model providers can modify the model's internal representations [135, 237], down to a granular level, in combination with other tools that help reveal what those internal representations encode.
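A minimal sketch of this kind of intervention, assuming a toy two-layer network and a hand-picked "steering vector" added to a hidden activation. This is illustrative only; representation-editing methods such as those in [135, 237] operate on learned directions inside large trained models.

```python
def forward(x, w_hidden, w_out, steer=None):
    # Hidden layer with ReLU activations.
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    if steer is not None:
        # The intervention: shift the internal representation directly.
        hidden = [h + s for h, s in zip(hidden, steer)]
    return sum(w * h for w, h in zip(w_out, hidden))

# Tiny hand-set network, purely for illustration.
w_hidden = [[1.0, 0.0], [0.0, 1.0]]
w_out = [1.0, -1.0]
x = [0.2, 0.4]

baseline = forward(x, w_hidden, w_out)
steered = forward(x, w_hidden, w_out, steer=[0.5, 0.0])  # push along unit 0
print(baseline, steered)  # the output shifts by w_out[0] * 0.5 = 0.5
```

The point of pairing this with interpretability tooling is that the direction being edited should first be understood (what the hidden unit or direction encodes) before the provider intervenes on it.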
1.1.3 Capability Modification

Model development
2.4 Engineering & Development

Model development > Data-related
1.1 Model

Model evaluations
2.2.2 Testing & Evaluation

Model evaluations > General evaluations
2.2.2 Testing & Evaluation

Model evaluations > Benchmarking
3.2.1 Benchmarks & Evaluation

Model evaluations > Red teaming
2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Other (multiple stages)
Applies across multiple lifecycle stages
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks