Training methods that shape model behavior through objectives, feedback, and optimization targets.
AI model-assisted oversight can help monitor and supervise the training of increasingly capable GPAI systems, which may become difficult for human supervisors to oversee at scale during training or testing. Supervision becomes especially difficult when advanced GPAIs perform near or above human level in specialized domains, where supervision quality may fail to keep pace with capability improvements. The training signal may include labeled data, reward functions, and user feedback on produced outputs.
Currently, there are two broad approaches to providing scalable training signals to such systems:
1. Scalable oversight: Improving the supervisor's ability to supervise, so that they can provide accurate training signals quickly and at scale [31]. For example, a debate format can be used between two GPAI systems (two instances of the same GPAI, or similarly capable systems). A human supervisor judges the debate, making it easier to assess correct responses in domains that might otherwise require significant time investment or domain-specific expertise [102].
2. Weak-to-strong generalization: Enhancing the training signal while ensuring that the enhanced signal remains faithful to the intentions of the original human-provided signal [37]. For example, a hierarchical ("bootstrapping") oversight approach can be implemented: a series of GPAI models with increasing capabilities is used, where each model in the hierarchy provides oversight for the next, more capable model. The least capable model at the base of the hierarchy is the only one directly overseen by human supervisors, as it is easier to oversee than the more capable models.
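The bootstrapping chain described above can be sketched in code. This is a minimal illustration, not an implementation from any cited framework: the `Model` class, the `bootstrap_oversight` function, and the capability scores are all hypothetical names chosen for this example.

```python
# Illustrative sketch (not a real API) of hierarchical ("bootstrapping")
# oversight: models are ordered from least to most capable, human
# supervisors directly oversee only the least capable model, and each
# model then supplies the oversight signal for the next, stronger one.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capability: float  # illustrative capability score in [0, 1]

def bootstrap_oversight(models):
    """Return (overseer, trainee) pairs for a capability-ordered chain.

    `models` must be sorted from least to most capable; the human
    supervisor oversees only the first (easiest-to-oversee) model.
    """
    assert all(a.capability <= b.capability
               for a, b in zip(models, models[1:])), "order by capability"
    pairs = []
    overseer = "human"
    for model in models:
        pairs.append((overseer, model.name))
        overseer = model.name  # this model oversees the next, stronger one
    return pairs

chain = bootstrap_oversight(
    [Model("M1", 0.3), Model("M2", 0.6), Model("M3", 0.9)]
)
# chain == [("human", "M1"), ("M1", "M2"), ("M2", "M3")]
```

Only the base of the chain needs direct human oversight; every other training signal is produced by the model one step below the trainee in capability.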
Model development → 2.4 Engineering & Development
Model development > Data-related → 1.1 Model
Model evaluations → 2.2.2 Testing & Evaluation
Model evaluations > General evaluations → 2.2.2 Testing & Evaluation
Model evaluations > Benchmarking → 3.2.1 Benchmarks & Evaluation
Model evaluations > Red teaming → 2.2.2 Testing & Evaluation

Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Gipiškis, Rokas; San Joaquin, Ayrton; Chin, Ze Shen; Regenfuß, Adrian; Gil, Ariel; Holtman, Koen (2024)
Organizations and governments that develop, deploy, use, and govern AI must coordinate on effective risk mitigation. However, the landscape of AI risk mitigation frameworks is fragmented, uses inconsistent terminology, and has gaps in coverage. This paper introduces a preliminary AI Risk Mitigation Taxonomy to organize AI risk mitigations and provide a common frame of reference. The Taxonomy was developed through a rapid evidence scan of 13 AI risk mitigation frameworks published between 2023 and 2025, which were extracted into a living database of 831 distinct AI risk mitigations.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Measure
Quantifying, testing, and monitoring identified AI risks