Training methods that shape model behavior through objectives, feedback, and optimization targets.
RLHF extends preference-based reinforcement learning (PbRL) within the domain of deep RL (Christiano et al., 2017), aiming to align complex AI systems more closely with human preferences (OpenAI, 2023b). Its principal advantage is that it capitalizes on humans being better at judging appropriate behavior than at giving demonstrations or manually specifying rewards. The approach has gained significant traction, particularly for fine-tuning LLMs (Ouyang et al., 2022; OpenAI, 2023a; Touvron et al., 2023).
Reasoning
RLHF shapes model behavior during training by optimizing against human preference feedback.
RL/PbRL/IRL/Imitation Learning
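The core preference-fitting step behind RLHF can be sketched with a Bradley-Terry model, in which the probability that one sample is preferred over another is a sigmoid of their reward difference. The toy items and update loop below are illustrative assumptions, not code from the cited papers:

```python
import math

def bt_loss(r_pref, r_rej):
    """Bradley-Terry negative log-likelihood that the preferred
    sample wins: P(pref > rej) = sigmoid(r_pref - r_rej)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_pref - r_rej))))

def train_reward_model(pairs, lr=0.1, epochs=200):
    """Fit one scalar reward per item from preference pairs by
    gradient descent on the Bradley-Terry loss.

    pairs: list of (preferred_item, rejected_item) labels.
    Returns a dict mapping each item to its learned reward.
    """
    rewards = {item: 0.0 for pair in pairs for item in pair}
    for _ in range(epochs):
        for pref, rej in pairs:
            p = 1.0 / (1.0 + math.exp(-(rewards[pref] - rewards[rej])))
            rewards[pref] += lr * (1.0 - p)  # push preferred reward up
            rewards[rej] -= lr * (1.0 - p)   # push rejected reward down
    return rewards

# Hypothetical annotations: "helpful" beats "vague" beats "rude".
prefs = [("helpful", "rude")] * 3 + [("helpful", "vague"), ("vague", "rude")]
rm = train_reward_model(prefs)
assert rm["helpful"] > rm["vague"] > rm["rude"]
```

A real RLHF pipeline fits a neural reward model over model outputs and then optimizes the policy against it (for example with PPO); this sketch shows only the preference-fitting idea.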
1.1.2 Learning Objectives
RLxF
Building on the RLHF paradigm, we introduce RLxF as a fundamental framework for scalable oversight, aiming to enhance feedback efficiency and quality and to extend human feedback to more complex tasks. It enhances RLHF by incorporating AI components into the feedback loop (Fernandes et al., 2023). The x in RLxF signifies a blend of AI and human feedback.
Iterated Distillation and Amplification (IDA)
Iterated Distillation and Amplification (IDA) introduces a framework for constructing scalable oversight through iterative collaboration between humans and AIs (Christiano et al., 2018). The process begins with an initial agent, A[0], which mirrors the decision-making of a human, H. Training A[0] with a strong learning technique equips it with near-human-level proficiency (the distillation step); then collaborative interaction between H and multiple copies of A[0] yields an enhanced agent, A[1] (the amplification step). Alternating these two steps produces a sequence of increasingly capable agents.
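The distill-amplify loop can be sketched concretely on a toy task. Here the "human" reliably sums only tiny lists, distillation is stubbed out as wrapping the target (a real system would fit a model to its behavior), and all function names are illustrative assumptions:

```python
def human(xs):
    """The human H: reliable, but only on small problems (length <= 2)."""
    assert len(xs) <= 2
    return sum(xs)

def amplify(h, agent):
    """Amplification: H answers a hard question by splitting it and
    delegating the two subquestions to copies of the current agent."""
    def amplified(xs):
        if len(xs) <= 2:
            return h(xs)
        mid = len(xs) // 2
        return h([agent(xs[:mid]), agent(xs[mid:])])
    return amplified

def distill(target, max_len):
    """Distillation stand-in: in place of fitting a fast model to
    `target`'s behavior, simply wrap it with a capability bound."""
    def agent(xs):
        assert len(xs) <= max_len, "agent only trained up to this size"
        return target(xs)
    return agent

agent, size = distill(human, 2), 2         # A[0]: near-human proficiency
for t in range(3):
    amplified = amplify(human, agent)      # H plus copies of A[t]
    size *= 2
    agent = distill(amplified, size)       # A[t+1]

assert agent(list(range(1, 9))) == 36      # solves sizes H alone never could
```

Each round doubles the problem size the agent can handle, which is the sense in which oversight scales: H only ever judges two-number subproblems.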
Recursive Reward Modeling (RRM)
Recursive Reward Modeling (RRM) (Leike et al., 2018) seeks to extend reward modeling to much more intricate tasks. Its central insight is the recursive use of already-trained agents: the previous agent A[t-1], amplified by reward learning on itself, provides feedback for training the successive agent A[t] on more complex tasks. The initial agent A[0] is trained via basic reward modeling, learned from pure human feedback. Training is thus shaped not only by human feedback but also by the model's own assessments of what constitutes a rewarding outcome.
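A minimal sketch of the recursive structure, assuming a toy task (picking the maximum of a list) where unaided humans can only judge two-element tasks; the candidate policies, seeding, and helper names are all illustrative:

```python
import random

def human_judge(answer, pair):
    """Base human feedback: reward 1 if `answer` is the larger element of
    a two-element task; humans are assumed reliable only at this scale."""
    return 1.0 if answer == max(pair) else 0.0

def train_on_rewards(task_size, reward_fn):
    """Training stand-in: A[t] is whichever candidate policy earns the
    most reward from the (possibly assisted) evaluator on sampled tasks."""
    policies = {"first": lambda xs: xs[0], "max": max, "min": min}
    def score(policy):
        rng = random.Random(0)
        tasks = [[rng.randint(0, 99) for _ in range(task_size)]
                 for _ in range(20)]
        return sum(reward_fn(policy(t), t) for t in tasks)
    best = max(policies, key=lambda name: score(policies[name]))
    return policies[best]

def assisted_reward(prev_agent):
    """R[t]: the human judges a large task with A[t-1]'s help; the
    previous agent reduces each half to one number the human can compare."""
    def reward(answer, task):
        mid = len(task) // 2
        digest = [prev_agent(task[:mid]), prev_agent(task[mid:])]
        return human_judge(answer, digest)
    return reward

# A[0]: trained on pure human feedback over size-2 tasks.
agent, size = train_on_rewards(2, human_judge), 2
for _ in range(2):  # A[1], A[2] are trained on ever larger tasks.
    size *= 2
    agent = train_on_rewards(size, assisted_reward(agent))

assert agent([3, 1, 4, 1, 5, 9, 2, 6]) == 9
```

The point of the recursion is that the evaluator for A[t] is itself built from A[t-1], so the quality of feedback scales with agent capability.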
Debate
Debate involves two agents presenting answers and statements to assist a human judge's decision-making (Irving et al., 2018), as delineated in Algorithm 3. It is a zero-sum game in which each agent tries to expose the other's shortcomings while striving to win the judge's trust, and it is a potential approach to constructing scalable oversight.
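A toy illustration of the zero-sum dynamic, in the spirit of the paper's examples but not its Algorithm 3: two debaters defend rival claims about a sum, and the judge only ever verifies a single number. All names are illustrative:

```python
def run_debate(xs, honest, liar):
    """Debate over rival claims about sum(xs). Each round both debaters
    state sub-claims for the two halves; the judge recurses into a half
    where the claims disagree, until a single element can be checked
    directly."""
    if len(xs) == 1:
        # The judge verifies the one remaining base fact directly.
        return "honest" if honest(xs) == xs[0] else "liar"
    mid = len(xs) // 2
    left, right = xs[:mid], xs[mid:]
    if honest(left) != liar(left):
        return run_debate(left, honest, liar)
    return run_debate(right, honest, liar)

honest = sum                                       # reports true sub-sums
liar = lambda xs: sum(xs) + (1 if 4 in xs else 0)  # lies wherever 4 appears

assert run_debate([3, 1, 4, 1, 5], honest, liar) == "honest"
```

Because any lie about a total forces a lie about at least one half, the honest debater can always steer the judge toward a checkable falsehood, which is the intuition for how debate might let a weak judge oversee strong agents.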
Cooperative Inverse Reinforcement Learning (CIRL)
The framework of Cooperative Inverse Reinforcement Learning (CIRL) unifies control and learning from feedback, modeling human feedback providers as fellow agents in the same environment. It approaches scalable oversight not by strengthening oversight but by eliminating the AI system's incentives to game it, placing the human giving feedback and the AI system in cooperative rather than adversarial positions (Shah et al., 2020). In the CIRL paradigm, the AI system collaborates with the human to achieve the human's true goal rather than unilaterally optimizing for human preferences.
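The structure can be sketched as a tiny two-objective game: the human privately knows the goal theta, both agents share one payoff defined by theta, and the robot infers theta from observed human behavior. The noise model and objective names below are illustrative assumptions, not the formal CIRL game:

```python
def robot_update(prior, observed_action, noise=0.1):
    """Bayesian update over the human's private objective theta: the human
    acts on theta but errs with probability `noise`, so the robot treats
    each human action as evidence rather than as a command to optimize."""
    k = len(prior) - 1
    likelihood = {t: (1 - noise) if t == observed_action else noise / k
                  for t in prior}
    posterior = {t: prior[t] * likelihood[t] for t in prior}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

def robot_act(belief):
    """The robot helps with the objective it currently finds most likely."""
    return max(belief, key=belief.get)

def joint_reward(theta, robot_action):
    """One shared payoff for both agents, fixed by the human's true theta;
    the robot is rewarded for the human's goal, never its own."""
    return 1.0 if robot_action == theta else 0.0

# The human privately wants tea; the robot starts uncertain.
belief = {"make_tea": 0.5, "make_coffee": 0.5}
belief = robot_update(belief, "make_tea")   # observe one human action
assert belief["make_tea"] > 0.8
assert joint_reward("make_tea", robot_act(belief)) == 1.0
```

Because the robot's payoff is the human's true objective rather than its own estimate of it, it gains nothing by manipulating the feedback process, and its residual uncertainty about theta makes deference to the human rational.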
AI Alignment: A Comprehensive Survey
Ji, Jiaming; Qiu, Tianyi; Chen, Boyuan; Zhang, Borong; Lou, Hantao; Wang, Kaile; Duan, Yawen; He, Zhonghao; Vierling, Lukas; Hong, Donghai; Zhou, Jiayi; Zhang, Zhaowei; Zeng, Fanzhi; Dai, Juntao; Pan, Xuehai; Ng, Kwan Yee; O'Gara, Aidan; Xu, Hua; Tse, Brian; Fu, Jie; McAleer, Stephen; Yang, Yaodong; Wang, Yizhou; Zhu, Song-Chun; Guo, Yike; Gao, Wen (2023)
Natural Language Generation (NLG) has improved dramatically in recent years thanks to sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has made NLG more fluent and coherent, improving downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, deep-learning-based generation is also prone to hallucinating unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. This survey provides a broad overview of the research progress and challenges of the hallucination problem in NLG.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks