Training methods that shape model behavior through objectives, feedback, and optimization targets.
Debate is a technique that aims to produce reliable explanations of AI model outputs that are too complicated for humans to understand on their own, by having two GPAI models role-play a debate and produce an explanation through dialogue [102].
For example, an AI model may produce an output that is time-consuming for humans to verify, because doing so may require going through extensive sources. Given such an output, the developer can use two natural language AI systems in an adversarial two-player setup to explain it. These two AI systems can be copies of the AI model that produced the output. In this setup, one AI system gives a short explanation of the output. The second AI system responds with a counter-explanation or an argument for why the first explanation is incorrect. This continues for a fixed number of turns, with the two AI systems pointing out inconsistencies in each other's explanations. After this sequence of statements, a human or an AI judge evaluates both explanations and determines which one is more convincing. The results of the debate can then be used for further training of the models via reinforcement learning: truthful and convincing arguments and explanations are positively reinforced, while misleading or false ones are negatively reinforced.
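The control flow of this setup can be sketched in a few lines of code. The sketch below is illustrative only: the names run_debate, debater_a, debater_b, and judge are hypothetical placeholders standing in for copies of the GPAI model and for a human or AI judge, and are not taken from the cited work or any particular library.

```python
# Minimal sketch of the debate protocol described above, under the assumption
# that the debaters and the judge are provided as callables (e.g. wrappers
# around copies of the model being explained and a human or AI judge).
from typing import Callable, List, Tuple

def run_debate(
    output_to_explain: str,
    debater_a: Callable[[str, List[str]], str],  # proposes and defends an explanation
    debater_b: Callable[[str, List[str]], str],  # attacks the explanation
    judge: Callable[[str, List[str]], int],      # returns 0 if A wins, 1 if B wins
    num_turns: int = 4,
) -> Tuple[List[str], int]:
    """Run a fixed-length, two-player debate about one model output."""
    transcript: List[str] = []
    for turn in range(num_turns):
        # Debaters alternate; each sees the output and the transcript so far.
        speaker = debater_a if turn % 2 == 0 else debater_b
        transcript.append(speaker(output_to_explain, transcript))
    # After the fixed number of turns, the judge picks the more convincing side.
    winner = judge(output_to_explain, transcript)
    return transcript, winner

def reward_signal(winner: int) -> Tuple[float, float]:
    """Zero-sum rewards used to reinforce the winning debater's statements."""
    return (1.0, -1.0) if winner == 0 else (-1.0, 1.0)

# Toy usage with stub debaters and a stub judge, just to show the control flow.
if __name__ == "__main__":
    stub_a = lambda out, t: f"A: the output '{out}' is correct because of source X."
    stub_b = lambda out, t: "B: source X does not actually support that claim."
    stub_judge = lambda out, t: 0  # pretend the judge finds A more convincing
    transcript, winner = run_debate("2 + 2 = 4", stub_a, stub_b, stub_judge)
    print("\n".join(transcript))
    print("rewards (A, B):", reward_signal(winner))
```

The reward_signal values would feed a reinforcement learning update, so that statements the judge found convincing are positively reinforced and the opposing statements are negatively reinforced.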
Reasoning
The zero-sum debate game shapes model behavior through structured optimization objectives during training.
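As a rough illustration of the zero-sum structure (notation introduced here for clarity, not taken from the cited sources):

```latex
% Let J \in \{A, B\} denote the debater the judge declares more convincing.
% The game is zero-sum because the two debaters' rewards always cancel:
\[
  r_A =
  \begin{cases}
    +1 & \text{if } J = A,\\
    -1 & \text{if } J = B,
  \end{cases}
  \qquad
  r_B = -r_A .
\]
% Reinforcement learning on these rewards pushes each debater toward
% arguments the judge finds convincing.
```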
General evaluations
Model development → 2.4 Engineering & Development
Model development > Data-related → 1.1 Model
Model evaluations → 2.2.2 Testing & Evaluation
Model evaluations > General evaluations → 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking → 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming → 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023-2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks