Cooperative AI (Dafoe et al., 2020, 2021) aims to address uncooperative and collectively harmful behaviors from AI systems (see §1.1.2). The lack of cooperative capabilities in AI systems can be seen as a form of failure under distribution shift: systems are trained in single-agent settings that differ qualitatively from the real world, which may be massively multi-agent. This is genuinely a shift in the data distribution, since the presence of other agents in the environment alters the state-transition dynamics and therefore the joint distribution of observations and rewards.
RL/PbRL/IRL/Imitation Learning
1.1.2 Learning Objectives

RLHF
RLHF expands upon PbRL within the domain of DRL (Christiano et al., 2017), aiming to more closely align complex AI systems with human preferences (OpenAI, 2023b). Its principal advantage is that humans are often better at judging appropriate behavior than at providing demonstrations or manually specifying rewards. This approach has gained significant traction, particularly for fine-tuning LLMs (Ouyang et al., 2022; OpenAI, 2023a; Touvron et al., 2023).
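The preference comparisons at the heart of RLHF are typically fit with a Bradley-Terry-style loss: the reward model should score the human-preferred trajectory above the rejected one. A minimal sketch, with illustrative function names and toy scalar rewards (not code from any of the cited works):

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood for one preference pair:
    -log sigma(r_chosen - r_rejected).  Minimizing this pushes the reward
    model to score the human-preferred trajectory higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A reward model that already agrees with the human label incurs a small
# loss; one that disagrees incurs a large loss.
agree = preference_loss(2.0, -1.0)
disagree = preference_loss(-1.0, 2.0)
```

In practice the scalar rewards come from a learned network evaluated on whole trajectories (or responses), and the loss is averaged over a dataset of human comparisons.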
RLxF
Building on the RLHF paradigm, we introduce RLxF as a fundamental framework for scalable oversight, aiming to enhance feedback efficiency and quality and to extend human feedback to more complex tasks. It enhances RLHF by incorporating AI components into the feedback loop (Fernandes et al., 2023); the x in RLxF signifies a blend of AI and human feedback.
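One way to picture the blend is as a routing rule that decides, per comparison, whether a human or an AI labeler supplies the feedback. This is a hypothetical sketch with made-up function names; actual RLxF systems combine the two sources in richer ways:

```python
def rlxf_labels(pairs, human_labeler, ai_labeler, needs_human):
    """RLxF sketch: route each comparison pair to a scarce human labeler
    only when `needs_human` flags it (e.g. low AI-labeler confidence);
    otherwise use the cheap, scalable AI labeler.  The 'x' is exactly this
    blend of feedback sources."""
    return [(human_labeler if needs_human(p) else ai_labeler)(p)
            for p in pairs]

labels = rlxf_labels(
    [("a", "b"), ("c", "d")],
    human_labeler=lambda p: ("human", p[0]),
    ai_labeler=lambda p: ("ai", p[0]),
    needs_human=lambda p: p == ("a", "b"),  # toy routing rule
)
```

The design point is that human labels are the expensive resource, so the routing rule controls how far scarce human oversight is stretched.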
Iterated Distillation and Amplification (IDA)
Iterated Distillation and Amplification (IDA) introduces a framework for constructing scalable oversight through iterative collaboration between humans and AIs (Christiano et al., 2018). The process begins with an initial agent, denoted A[0], which mirrors the decision-making of a human, H. A[0] is trained with a powerful technique that gives it near-human-level proficiency (the distillation step). Then, collaborative interaction between H and multiple A[0] instances produces an enhanced agent, A[1] (the amplification step), and the two steps alternate.
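The alternating loop can be sketched as a higher-order function, where `distill` and `amplify` stand in for whatever training and collaboration procedures are used. This is a toy sketch under those assumptions, not Christiano et al.'s implementation:

```python
def ida(human_policy, distill, amplify, rounds=3):
    """Iterated Distillation and Amplification sketch.
    distill(policy)          -> fast approximation of `policy` (distillation)
    amplify(human, agent)    -> stronger policy built from the human H
                                consulting copies of `agent` (amplification)"""
    agent = distill(human_policy)                 # A[0] mirrors the human H
    for _ in range(rounds):
        amplified = amplify(human_policy, agent)  # H + copies of A[t]
        agent = distill(amplified)                # A[t+1]
    return agent

# Toy check: represent a "policy" as a capability score; amplification adds
# the human's capability, distillation loses 10% of it.
boosted = ida(1.0, lambda p: 0.9 * p, lambda h, a: h + a, rounds=2)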
Recursive Reward Modeling (RRM)
Recursive Reward Modeling (RRM) (Leike et al., 2018) seeks to broaden the application of reward modeling to much more intricate tasks. The central insight of RRM is to reuse already-trained agents as feedback providers: an amplified version of agent A[t-1] supplies the feedback from which the reward model for the successive agent A[t] is learned, allowing A[t] to be trained on more complex tasks. The base agent A[0] is trained via ordinary reward modeling, learned from pure human feedback. As a result, training is shaped not only by human feedback but also by the model's own assessments of what constitutes a rewarding outcome.
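The recursion can be sketched as a loop over training levels; all function names here are placeholders for the actual training procedures, and the toy values only illustrate the data flow:

```python
def rrm(human_feedback, amplify, train_reward_model, train_agent, levels=3):
    """Recursive Reward Modeling sketch.
    Level 0: a reward model is learned from raw human feedback and A[0] is
    trained against it.  Level t: the previous agent A[t-1] is amplified
    and becomes the feedback source for the next reward model, so A[t] can
    be trained on harder tasks than humans could evaluate directly."""
    reward_model = train_reward_model(human_feedback)  # pure human feedback
    agent = train_agent(reward_model)                  # A[0]
    for _ in range(1, levels):
        assistant = amplify(agent)                     # amplified A[t-1]
        reward_model = train_reward_model(assistant)   # feedback via assistant
        agent = train_agent(reward_model)              # A[t]
    return agent

# Toy data flow: feedback and agents are scalars; amplification adds 1.
final_agent = rrm(1.0, lambda a: a + 1.0, lambda fb: fb, lambda rm: rm,
                  levels=3)
```

The sketch makes the dependency explicit: each level's reward model is only as good as the amplified agent feeding it, which is why RRM is paired with alignment checks at every level.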
Debate
Debate involves two agents presenting answers and statements to assist human judges in their decision-making (Irving et al., 2018), as delineated in Algorithm 3. It is a zero-sum game in which each agent tries to expose the other's shortcomings while striving to win the human judge's trust, and it is a potential approach to constructing scalable oversight.
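The protocol's turn structure can be sketched as alternating statements followed by a judge's verdict. This is a toy sketch with illustrative names; the actual protocol is the one delineated in Algorithm 3:

```python
def debate(question, agent_a, agent_b, judge, rounds=2):
    """Minimal debate sketch: two agents alternate statements on a question,
    each seeing the transcript so far and trying to expose flaws in the
    other's case; a judge then scores the full transcript.  The game is
    zero-sum: one agent's win is exactly the other's loss."""
    transcript = [question]
    for _ in range(rounds):
        transcript.append(agent_a(transcript))  # A speaks, seeing history
        transcript.append(agent_b(transcript))  # B replies
    p_a_wins = judge(transcript)  # judge's credence, in [0, 1], that A is right
    return ("A" if p_a_wins > 0.5 else "B"), transcript

winner, transcript = debate(
    "Is the claim true?",
    lambda t: "A: evidence for the claim",
    lambda t: "B: rebuttal of A's last point",
    lambda t: 0.7,  # toy judge that favors A
    rounds=2,
)
```

The hoped-for property is that, at equilibrium of this zero-sum game, the honest strategy wins, so the judge's limited attention is leveraged into oversight of claims the judge could not verify alone.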
AI Alignment: A Comprehensive Survey
Ji, Jiaming; Qiu, Tianyi; Chen, Boyuan; Zhang, Borong; Lou, Hantao; Wang, Kaile; Duan, Yawen; He, Zhonghao; Vierling, Lukas; Hong, Donghai; Zhou, Jiayi; Zhang, Zhaowei; Zeng, Fanzhi; Dai, Juntao; Pan, Xuehai; Ng, Kwan Yee; O'Gara, Aidan; Xu, Hua; Tse, Brian; Fu, Jie; McAleer, Stephen; Yang, Yaodong; Wang, Yizhou; Zhu, Song-Chun; Guo, Yike; Gao, Wen (2023)
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. This survey provides a broad overview of the research progress and challenges in the hallucination problem in NLG.