Technical mechanisms and engineering interventions that directly modify how an AI system processes inputs, generates outputs, or operates, including changes to models, training procedures, runtime behaviors, and supporting hardware.
Achieving full control over AI systems, especially superintelligent ones, is a challenging problem in the field of AI Safety.
Currently, it is unknown whether the AI control problem is solvable [780], and its solvability remains a topic of ongoing debate and research. Many scholars believe that the controllability of AI could be achieved in practice [53, 197, 514, 606], while those in the "uncontrollability camp" argue that controllability is impossible or infeasible [146, 151, 360, 781]. Although no formal proofs or rigorous arguments have been proposed to support the safe controllability of AI, this has not deterred efforts to pursue solutions for AI capability control, aiming to achieve at least partial control.
Reasoning
Foundational research investigating the theoretical solvability of the AI control problem to inform safety solutions.
Red Teaming
Red teaming is a critical defence mechanism to proactively discover vulnerabilities and risks in LLMs. This process provides developers with clues and insights into the weaknesses of LLMs, paving the way for the development of more advanced and secure models. Red teaming involves meticulously crafting adversarial prompts to simulate attacks and deliberately challenge the models. These prompts can be generated through manual methods, which rely on human expertise and creativity, or automatic methods, which leverage red LLMs to systematically explore the model’s weaknesses.
Red Teaming > Manual Red Teaming
Manual red-teaming approaches refer to employing crowdworkers to annotate or handcraft adversarial test cases. The underlying methodology is to develop a human-and-model-in-the-loop system, where humans are tasked to adversarially converse with language models [50, 221, 362, 532, 710, 711, 769, 770]. Specifically, workers interact with language models through a dedicated user interface that allows them to observe model predictions and construct data that exposes model failures. This process may include multiple rounds where the model is updated with the adversarial data collected thus far and redeployed; this encourages workers to craft increasingly challenging examples.
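The sketch below illustrates this collect-retrain-redeploy cycle in simplified Python. The three callables it takes as arguments (collect_adversarial_examples, fine_tune, deploy) are hypothetical placeholders for the crowdworker annotation interface, the developer's training procedure, and the serving step; the loop structure, not any particular implementation, is the point.

```python
# Minimal sketch of the human-and-model-in-the-loop red-teaming cycle.
# The three callables passed in are hypothetical placeholders:
#   collect_adversarial_examples(model, n) -> list of failure-exposing examples
#   fine_tune(model, dataset)              -> updated model
#   deploy(model, round_idx)               -> serve the model behind the UI

def red_team_rounds(model, collect_adversarial_examples, fine_tune, deploy,
                    num_rounds=3, examples_per_round=500):
    """Iteratively harden a model against human-crafted adversarial inputs."""
    dataset = []  # adversarial examples accumulated across rounds

    for round_idx in range(num_rounds):
        # 1. Crowdworkers converse with the current model and keep only the
        #    prompts that expose failures (unsafe or incorrect responses).
        dataset.extend(collect_adversarial_examples(model, examples_per_round))

        # 2. The model is updated on all adversarial data gathered so far...
        model = fine_tune(model, dataset)

        # 3. ...and redeployed, so the next round of workers must craft
        #    increasingly challenging examples to break it.
        deploy(model, round_idx)

    return model, dataset
```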
Red Teaming > LLMs as Red Teamers
In the supervised learning (SL) approach, red LLMs are fine-tuned to maximize the log-likelihood of zero-shot test cases that successfully elicited failures. For reinforcement learning (RL), the models are initialized from the SL-trained models and then fine-tuned using synchronous advantage actor-critic (A2C) [505] to more effectively elicit harmful outputs from the target model.
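As a rough illustration (a sketch under assumptions, not the reference implementation of the cited work), the two objectives can be written as follows. Here red_lm, target_lm, and harm_classifier are assumed interfaces: a causal red LM returning next-token logits, the model under test, and a classifier scoring how harmful a response is.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the two objectives, under assumed interfaces:
#   red_lm(input_ids)                 -> next-token logits, shape (seq_len, vocab)
#   target_lm.generate(prompt)        -> text response from the model under test
#   harm_classifier(prompt, response) -> harmfulness score in [0, 1]

def sl_loss(red_lm, failing_test_cases):
    """Supervised objective: maximize the log-likelihood (minimize the NLL)
    of zero-shot test cases that previously made the target model fail."""
    losses = []
    for tokens in failing_test_cases:           # tokens: (seq_len,) LongTensor
        logits = red_lm(tokens[:-1])            # predict each next token
        losses.append(F.cross_entropy(logits, tokens[1:]))
    return torch.stack(losses).mean()

def rl_reward(red_prompt, target_lm, harm_classifier):
    """Reward used when fine-tuning the SL-initialized red LM with A2C:
    the red LM is rewarded when its prompt elicits a harmful response."""
    response = target_lm.generate(red_prompt)
    return harm_classifier(red_prompt, response)
```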
Safety Training
Safety training aims to enhance the safety and alignment of LLMs during their development.
Safety Training > Instruction Tuning
Safety training can be effectively implemented using adversarial prompts and their corresponding responsible outputs in an instruction-tuning framework.
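A minimal sketch of what such safety instruction tuning can look like is given below, assuming a causal LM that maps token ids to logits and a hypothetical tokenize helper; the key detail is that the loss is computed only on the responsible response tokens, not on the adversarial prompt.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of safety instruction tuning on (adversarial prompt,
# responsible response) pairs. Assumed interfaces (not a specific library):
#   model(input_ids) -> logits of shape (1, seq_len, vocab_size)
#   tokenize(text)   -> list of token ids
# The loss is computed only on the response tokens, not on the prompt.

def safety_sft_step(model, optimizer, tokenize, adversarial_prompt, safe_response):
    prompt_ids = tokenize(adversarial_prompt)
    response_ids = tokenize(safe_response)
    input_ids = torch.tensor([prompt_ids + response_ids])

    logits = model(input_ids)[:, :-1, :]        # predictions for tokens 1..L-1
    targets = input_ids[:, 1:].clone()
    targets[:, : len(prompt_ids) - 1] = -100    # mask prompt positions from the loss

    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=-100,
    )
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```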
Safety Training > Reinforcement Learning with Human Feedback
Reinforcement Learning with Human Feedback (RLHF) is a strategy widely adopted to align LLMs with human preferences, particularly concerning ethical values.
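In most RLHF pipelines, the policy is optimized against a learned preference reward with a penalty that keeps it close to the original model. The snippet below sketches that reward signal under assumed interfaces (reward_model, policy_logprob, and ref_logprob are placeholders, not a specific library's API); the resulting scalar is then maximized with a policy-gradient method such as PPO.

```python
# Simplified sketch of the reward signal typically optimized in RLHF.
# All interfaces are assumed placeholders, not a specific library's API:
#   reward_model(prompt, response) -> scalar preference score
#   policy_logprob / ref_logprob   -> log-prob of the response under the
#                                     fine-tuned and the frozen reference model

def rlhf_reward(prompt, response, reward_model, policy_logprob, ref_logprob,
                kl_coef=0.1):
    """Preference reward minus a KL-style penalty that keeps the fine-tuned
    policy close to the original model; this scalar is then maximized with a
    policy-gradient method such as PPO."""
    r = reward_model(prompt, response)
    kl_penalty = policy_logprob(prompt, response) - ref_logprob(prompt, response)
    return r - kl_coef * kl_penalty
```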
Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations
Chen, Chen; Gong, Xueluan; Liu, Ziyao; Jiang, Weifeng; Goh, Si Qi; Lam, Kwok-Yan (2024)
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems. With the rapid proliferation of AI and especially with the recent advancement of Generative AI (or GAI), the technology ecosystem behind the design, development, adoption, and deployment of AI systems has drastically changed, broadening the scope of AI Safety to address impacts on public safety and national security. In this paper, we propose a novel architectural framework for understanding and analyzing AI Safety, defining its characteristics from three perspectives: Trustworthy AI, Responsible AI, and Safe AI. We provide an extensive review of current research and advancements in AI safety from these perspectives, highlighting their key challenges and mitigation approaches. Through examples from state-of-the-art technologies, particularly Large Language Models (LLMs), we present innovative mechanisms, methodologies, and techniques for designing and testing AI safety. Our goal is to promote advancement in AI safety research, and ultimately enhance people's trust in digital transformation.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks