Cross-organization coordination mechanisms, information sharing, and collaborative monitoring.
We put forward a framework to analyze the functions of, and relationships between, stakeholders in AI governance (see Figure 13). In this framework, we outline three main entities. Government Agencies oversee AI policies using legislative, judicial, and enforcement powers, and engage in international cooperation. Industry and AGI Labs research and deploy AI technologies, making them subjects of the governance framework, while also proposing techniques to govern themselves and influencing governance policy. Third Parties, including academia, Non-Governmental Organizations (NGOs), and Non-Profit Organizations (NPOs), not only audit corporate governance, AI systems, and their applications but also assist governments in policy-making. Proposals have been made for specific principles underpinning a multi-stakeholder AI governance landscape.
RL/PbRL/IRL/Imitation Learning
1.1.2 Learning Objectives
RLHF
RLHF expands upon PbRL within the domain of DRL (Christiano et al., 2017), aiming to more closely align complex AI systems with human preferences (OpenAI, 2023b). Its principal advantage is that it capitalizes on the fact that humans are better at judging appropriate behavior than at providing demonstrations or manually specifying rewards. This approach has gained significant traction, particularly in fine-tuning LLMs (Ouyang et al., 2022; OpenAI, 2023a; Touvron et al., 2023).
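The reward-modeling step of RLHF is typically trained on pairwise preference data with a Bradley-Terry objective. A minimal sketch of that loss (function names here are illustrative, not from any library):

```python
import math

def bradley_terry_loss(r_preferred: float, r_rejected: float) -> float:
    """Negative log-likelihood that the human-preferred response wins,
    under the Bradley-Terry preference model used in RLHF reward
    modeling: P(preferred > rejected) = sigmoid(r_preferred - r_rejected)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_preferred - r_rejected))))

# The loss shrinks as the reward model separates the preferred
# response from the rejected one by a larger margin.
loss_small_margin = bradley_terry_loss(0.5, 0.0)
loss_large_margin = bradley_terry_loss(2.0, 0.0)
```

Minimizing this loss over a dataset of human comparisons yields the scalar reward model that the RL stage then optimizes against.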
RLxF
Building on the RLHF paradigm, we introduce RLxF as a fundamental framework for scalable oversight, aiming to improve the efficiency and quality of feedback and to extend it to more complex tasks. RLxF enhances RLHF by incorporating AI components into the feedback process (Fernandes et al., 2023); the "x" in RLxF denotes a blend of AI and human feedback sources.
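The core labeling pattern can be sketched as follows: scarce human labels are used when available, and an AI feedback model (as in RLAIF) fills in the rest. All names are illustrative; the AI judge here is a deliberately trivial stand-in:

```python
def ai_preference(response_a: str, response_b: str) -> int:
    """Toy stand-in for an AI feedback model: prefers the longer
    response. A real system would query a language model against
    a rubric or constitution."""
    return 0 if len(response_a) >= len(response_b) else 1

def rlxf_label(pair, human_label=None):
    """RLxF labeling: use the (scarce, high-quality) human label
    when present; otherwise fall back to AI feedback so that
    supervision scales to many more comparisons."""
    if human_label is not None:
        return human_label
    return ai_preference(*pair)
```

The design choice is simply a priority order: human judgments remain authoritative where they exist, while AI feedback supplies volume.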
Iterated Distillation and Amplification (IDA)
Iterated Distillation and Amplification (IDA) introduces a framework for constructing scalable oversight through iterative collaboration between humans and AIs (Christiano et al., 2018). The process commences with an initial agent, denoted A[0], which mirrors the decision-making of a human, H. A[0] is trained using a powerful technique that equips it with near-human-level proficiency (the distillation step). Then, collaborative interaction between H and multiple A[0] instances produces an enhanced agent, A[1] (the amplification step).
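The distill/amplify alternation can be sketched in a toy setting where the human is a slow oracle, amplification decomposes a question among copies of the current agent, and distillation simply copies the resulting behavior. All functions and the toy task (doubling an integer) are illustrative assumptions:

```python
def human(x: int) -> int:
    """H: slow but reliable oracle (toy task: double the input)."""
    return 2 * x

def distill(teacher):
    """Distillation: train a fast agent to imitate the teacher.
    Toy version: copy the teacher's behavior directly."""
    return lambda x: teacher(x)

def amplify(h, agent):
    """Amplification: H answers a question by decomposing it and
    delegating the subquestions to copies of the current agent."""
    return lambda x: agent(x // 2) + agent(x - x // 2) if x > 1 else h(x)

# IDA loop: A[0] imitates H; A[t+1] distills Amplify(H, A[t]).
agent = distill(human)                       # A[0]
for _ in range(3):
    agent = distill(amplify(human, agent))   # A[1], A[2], A[3]
```

In a real system, distillation is an expensive training run and amplification involves genuine human-AI deliberation; the loop structure, however, is exactly this alternation.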
Recursive Reward Modeling (RRM)
Recursive Reward Modeling (RRM) (Leike et al., 2018) seeks to extend reward modeling to much more intricate tasks. The central insight of RRM is to recursively use the already-trained agent A[t−1] to assist evaluation: reward learning for each successive agent A[t] is performed with the help of an amplified version of A[t−1], enabling training on increasingly complex tasks. A[0] is trained via basic reward modeling, learned from pure human feedback. Later agents are thus shaped not only by human feedback but also by the model's own assessments of what constitutes a rewarding outcome.
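The recursion can be sketched in a toy setting where "training" is reduced to selecting the candidate behavior that the current reward model scores highest, and the previous agent's behavior defines the evaluation target for the next, harder level. Every name and the numeric task are illustrative assumptions:

```python
def human_feedback(behavior: float) -> float:
    """Base case: humans directly score simple behaviors
    (toy: reward behaviors close to a target of 1.0)."""
    return -abs(behavior - 1.0)

def train_agent(reward_fn, candidates):
    """'Training' reduced to picking the candidate behavior the
    current reward model scores highest."""
    return max(candidates, key=reward_fn)

def assisted_feedback(helper_behavior: float):
    """A[t-1]'s output assists evaluation of the level-t task:
    here the new target is derived from the helper's behavior."""
    target = helper_behavior + 1.0
    return lambda behavior: -abs(behavior - target)

candidates = [0.0, 1.0, 2.0, 3.0]
agent = train_agent(human_feedback, candidates)        # A[0]
for _ in range(2):                                     # A[1], A[2]
    agent = train_agent(assisted_feedback(agent), candidates)
```

The point of the sketch is the dependency structure: each reward function beyond the base case is defined in terms of the previous agent, which is exactly what makes the scheme recursive.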
Debate
Debate involves two agents presenting answers and statements to assist human judges in their decision-making (Irving et al., 2018), as delineated in Algorithm 3. This is a zero-sum debate game in which agents try to expose each other's shortcomings while striving to win the human judges' trust, and it is a potential approach to constructing scalable oversight.
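The interaction protocol can be sketched as a transcript-building loop followed by a judgment. The debaters and judge below are trivial stand-ins (a real judge is a human; these names are illustrative assumptions, not Algorithm 3 itself):

```python
def make_debater(answer: str):
    """Toy debater that repeats a fixed statement each round."""
    def debater(question, transcript=None):
        return answer
    return debater

def judge(transcript):
    """Toy judge standing in for a human: trusts whichever side
    produced the longer (more detailed) arguments overall."""
    score = {"A": 0, "B": 0}
    for side, statement in transcript:
        score[side] += len(statement)
    return "A" if score["A"] >= score["B"] else "B"

def debate(question, debater_a, debater_b, judge_fn, rounds=2):
    """Minimal debate loop: opening answers, alternating rebuttal
    statements, then a single zero-sum verdict from the judge."""
    transcript = [("A", debater_a(question)), ("B", debater_b(question))]
    for _ in range(rounds):
        transcript.append(("A", debater_a(question, transcript)))
        transcript.append(("B", debater_b(question, transcript)))
    return judge_fn(transcript)  # zero-sum: exactly one side wins
```

The zero-sum structure lives in the final line: the judge returns a single winner, so any weakness one debater exposes in the other's statements directly costs the opponent the game.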
AI Alignment: A Comprehensive Survey
Ji, Jiaming; Qiu, Tianyi; Chen, Boyuan; Zhang, Borong; Lou, Hantao; Wang, Kaile; Duan, Yawen; He, Zhonghao; Vierling, Lukas; Hong, Donghai; Zhou, Jiayi; Zhang, Zhaowei; Zeng, Fanzhi; Dai, Juntao; Pan, Xuehai; Ng, Kwan Yee; O'Gara, Aidan; Xu, Hua; Tse, Brian; Fu, Jie; McAleer, Stephen; Yang, Yaodong; Wang, Yizhou; Zhu, Song-Chun; Guo, Yike; Gao, Wen (2023)
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. This survey provides a broad overview of the research progress and challenges in the hallucination problem in NLG.
Operate and Monitor
Running, maintaining, and monitoring the AI system post-deployment
Governance Actor
Regulator, standards body, or oversight entity shaping AI policy
Govern
Policies, processes, and accountability structures for AI risk management
6.5 Governance failure