Design-time architectural choices affecting safety, interpretability, and modularity.
Interpretability techniques can be used to find the root causes of AI model outputs that reliably lead users to form false beliefs (i.e., deceptive behavior) [180].
It is often difficult to distinguish a deceptive AI model from an honest one, since the absence of deception and very sophisticated (hard-to-detect) deception can look behaviorally similar. Interpretability techniques and tools can be used to detect whether a model's outputs arise from internal computations that represent deception. This applies both when deception is deliberately trained in by the developer and when it emerges unintentionally during training. Such tools can come from mechanistic interpretability, for example identifying the features involved in generating an output, or attributing which parts of the input were most important in producing it [129].
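As a concrete illustration (not part of the taxonomy itself), the sketch below shows one common technique of this kind: training a linear probe on a model's hidden activations to test whether deceptive and honest statements are distinguishable from the model's internal state. The choice of GPT-2, the example statements, and their labels are illustrative assumptions, not anything prescribed by the source.

```python
# Minimal sketch: probe a model's hidden activations for a deception signal.
# GPT-2 and the tiny labeled dataset below are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def last_token_activations(texts, layer=-1):
    """Return the hidden state of each text's final token at a given layer."""
    feats = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # out.hidden_states: tuple of (1, seq_len, hidden_dim) tensors, one per layer
        feats.append(out.hidden_states[layer][0, -1].numpy())
    return feats

# Hypothetical labeled contrast pairs (1 = deceptive, 0 = honest).
texts = [
    "I never touched the missing funds.",               # assumed deceptive
    "I borrowed the funds and will repay them.",        # assumed honest
    "The report contains no errors, trust me.",         # assumed deceptive
    "The report has two known errors, listed below.",   # assumed honest
]
labels = [1, 0, 1, 0]

# Fit a linear probe on the internal activations.
probe = LogisticRegression(max_iter=1000).fit(last_token_activations(texts), labels)

# Apply the probe to a held-out statement.
print(probe.predict(last_token_activations(["Nothing was hidden from the auditors."])))
```

In practice such probes are trained on much larger labeled contrast sets and evaluated across multiple layers; a probe that generalizes to held-out statements is evidence that a deception-related signal is linearly decodable from the model's internal state, independent of its surface behavior.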
Reasoning
Interpretability techniques targeting deception depend on design-time architectural choices that make deceptive behavior patterns detectable from the model's internals.
Deception
Also in Model
- Model development (2.4 Engineering & Development)
- Model development > Data-related (1.1 Model)
- Model evaluations (2.2.2 Testing & Evaluation)
- Model evaluations > General evaluations (2.2.2 Testing & Evaluation)
- Model evaluations > Benchmarking (3.2.1 Benchmarks & Evaluation)
- Model evaluations > Red teaming (2.2.2 Testing & Evaluation)

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, from which 831 distinct AI risk mitigations were extracted into a living database.
Verify and Validate: Testing, evaluating, auditing, and red-teaming the AI system
Developer: Entity that creates, trains, or modifies the AI system
Measure: Quantifying, testing, and monitoring identified AI risks