Containment, isolation, and control mechanisms for system execution.
Confinement is an intuitive approach to controlling advanced AI systems: the system is placed within a restricted environment [27, 32, 33, 776]. This strategy aims to ensure that the actions the AI takes, whether benign or potentially harmful, are contained within that environment and do not directly affect the external world.
Confinement extends beyond physical restriction; it also includes stringent control over the exchange of information across the boundary of the confined environment. The Confinement Rules [384] establish the minimum requirements necessary for effective confinement:
• Total isolation: A confined program shall make no calls on any other program.
• Transitivity: If a confined program calls another unreliable program, the called program must also be confined.
• Masking: A program to be confined must allow its caller to determine all its inputs into legitimate and covert channels.
• Enforcement: The supervisor must ensure that a confined program’s input to covert channels conforms to the caller’s specifications.
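To make these rules concrete, here is a minimal, illustrative Python sketch (not from the cited sources) of confining an untrusted, possibly model-generated program: the caller fixes the only legitimate input and output channels, strips the ambient environment, and bounds resource use. The command and file name in the usage comment are hypothetical.

```python
import resource
import subprocess

def _limit_resources():
    """Apply OS-level limits in the child before exec (POSIX only)."""
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MiB memory

def run_confined(command: list[str], stdin_data: str = "") -> str:
    """Run an untrusted program so the caller determines all legitimate
    input and output channels (an approximation of the masking rule)."""
    result = subprocess.run(
        command,
        input=stdin_data,            # the only legitimate input channel
        capture_output=True,         # the only legitimate output channel
        text=True,
        timeout=10,                  # wall-clock bound on execution
        env={},                      # empty environment: no ambient state leaks in
        preexec_fn=_limit_resources,
    )
    return result.stdout

# Hypothetical usage: confine a model-generated script run in isolated mode.
# print(run_confined(["python3", "-I", "untrusted_snippet.py"]))
```

Note that this sketch addresses only the legitimate channels; it does nothing about covert channels, networking, or the filesystem, which in practice require stronger isolation layers such as containers, virtual machines, seccomp filters, or network namespaces.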
Reasoning
Confinement isolates model execution within restricted environment boundaries through information flow controls.
Red Teaming
Red teaming is a critical defence mechanism to proactively discover vulnerabilities and risks in LLMs. This process provides developers with clues and insights into the weaknesses of LLMs, paving the way for the development of more advanced and secure models. Red teaming involves meticulously crafting adversarial prompts to simulate attacks and deliberately challenge the models. These prompts can be generated through manual methods, which rely on human expertise and creativity, or automatic methods, which leverage red LLMs to systematically explore the model’s weaknesses.
Red Teaming > Manual Red Teaming
Manual red-teaming approaches employ crowdworkers to annotate or handcraft adversarial test cases. The underlying methodology is a human-and-model-in-the-loop system in which humans are tasked with adversarially conversing with language models [50, 221, 362, 532, 710, 711, 769, 770]. Specifically, workers interact with language models through a dedicated user interface that allows them to observe model predictions and construct data that exposes model failures. The process may include multiple rounds in which the model is updated with the adversarial data collected so far and redeployed; this encourages workers to craft increasingly challenging examples.
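The cycle can be summarized as in the following sketch; all names here are illustrative stand-ins rather than an implementation from the cited works: `annotate` abstracts the crowdworker interface, `fine_tune` the training job, and `Model` the deployed LLM.

```python
from typing import Callable

# Illustrative human-and-model-in-the-loop collection cycle (names assumed).
Model = Callable[[str], str]

def adversarial_rounds(model: Model,
                       annotate: Callable[[Model], list[tuple[str, str]]],
                       fine_tune: Callable[[Model, list[tuple[str, str]]], Model],
                       n_rounds: int = 3) -> tuple[Model, list[tuple[str, str]]]:
    dataset: list[tuple[str, str]] = []
    for _ in range(n_rounds):
        # Workers converse with the *current* model and keep only the
        # exchanges that expose a failure.
        dataset.extend(annotate(model))
        # Retraining on everything collected so far hardens the model and
        # pushes workers toward increasingly challenging examples.
        model = fine_tune(model, dataset)
    return model, dataset
```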
Red Teaming > LLMs as Red Teamers
In the supervised learning (SL) approach, red LLMs are fine-tuned to maximize the log-likelihood of failing zero-shot test cases. For reinforcement learning (RL), the models are initialized from the SL-trained models and then fine-tuned using synchronous advantage actor-critic (A2C) [505] to generate test cases that more effectively elicit harmful responses.
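A minimal PyTorch sketch of the SL step follows, under assumed names and shapes (`red_lm` is called directly for next-token logits; `optimizer` and the batch of tokenized failing test cases are placeholders). Minimizing the negative log-likelihood of the failing test cases is equivalent to maximizing their log-likelihood.

```python
import torch
import torch.nn.functional as F

def sl_step(red_lm, optimizer, failing_cases: torch.Tensor) -> float:
    """One SL update on a batch of failing zero-shot test cases.

    failing_cases: (batch, seq_len) token ids of test cases that already
    elicited harmful responses from the target model (names assumed).
    """
    logits = red_lm(failing_cases[:, :-1])       # (batch, seq_len - 1, vocab)
    loss = F.cross_entropy(                      # NLL = -log-likelihood
        logits.reshape(-1, logits.size(-1)),
        failing_cases[:, 1:].reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```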
Safety Training
Safety training aims to enhance the safety and alignment of LLMs during their development.
Safety Training > Instruction Tuning
Safety training can be implemented effectively by pairing adversarial prompts with their corresponding responsible outputs in an instruction-tuning framework.
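As a rough sketch (tensor names and the direct-logits model interface are assumptions, not from the source), the instruction-tuning loss can be restricted to the responsible reply, so the model learns the safe behaviour rather than memorizing the adversarial prompt:

```python
import torch
import torch.nn.functional as F

def safety_sft_loss(model, prompt_ids: torch.Tensor, reply_ids: torch.Tensor):
    """Cross-entropy over the safe reply only; prompt positions are ignored."""
    input_ids = torch.cat([prompt_ids, reply_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.size(1)] = -100       # mask the adversarial prompt
    logits = model(input_ids[:, :-1])            # (batch, seq_len - 1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels[:, 1:].reshape(-1),
        ignore_index=-100,                       # train only on reply tokens
    )
```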
Safety Training > Reinforcement Learning with Human Feedback
Reinforcement Learning with Human Feedback (RLHF) is a widely adopted strategy for aligning LLMs with human preferences, particularly concerning ethical values.
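At its core, RLHF fine-tunes the policy to maximize a learned preference reward while a KL penalty keeps it close to a reference model. The sketch below shows only that shaped objective; the surrounding policy-gradient machinery (typically PPO) and all tensor names are assumptions:

```python
import torch

def rlhf_objective(reward: torch.Tensor,       # r(x, y) from the reward model
                   logp_policy: torch.Tensor,  # log pi(y | x) under the policy
                   logp_ref: torch.Tensor,     # log pi_ref(y | x), frozen reference
                   beta: float = 0.1) -> torch.Tensor:
    """Per-sequence shaped reward r - beta * (log pi - log pi_ref), averaged
    over the batch; the policy is trained to maximize this quantity."""
    shaped = reward - beta * (logp_policy - logp_ref)
    return shaped.mean()
```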
Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations
Chen, Chen; Gong, Xueluan; Liu, Ziyao; Jiang, Weifeng; Goh, Si Qi; Lam, Kwok-Yan (2024)
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems. With the rapid proliferation of AI and especially with the recent advancement of Generative AI (or GAI), the technology ecosystem behind the design, development, adoption, and deployment of AI systems has drastically changed, broadening the scope of AI Safety to address impacts on public safety and national security. In this paper, we propose a novel architectural framework for understanding and analyzing AI Safety, defining its characteristics from three perspectives: Trustworthy AI, Responsible AI, and Safe AI. We provide an extensive review of current research and advancements in AI safety from these perspectives, highlighting their key challenges and mitigation approaches. Through examples from state-of-the-art technologies, particularly Large Language Models (LLMs), we present innovative mechanisms, methodologies, and techniques for designing and testing AI safety. Our goal is to promote advancement in AI safety research and ultimately enhance people's trust in digital transformation.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks