Input validation, output filtering, and content moderation classifiers.
Also in: Non-Model
Introducing a dedicated safety control module is a promising avenue for improving the safety of LLMs.

Current LLM architectures often lack integrated, real-time safety checks that can monitor and mitigate potential threats or vulnerabilities as they emerge. A safety control module could act as an intermediary between the LLM and the user, intercepting and scrutinizing responses before they are delivered. Such a module would be designed to detect harmful content, privacy breaches, and other forms of unsafe behavior dynamically. For instance, Inan et al. (2023) propose an LLM-based input-output safeguard that improves AI safety and content moderation by classifying human prompts and model outputs, laying a foundation for future work on safety control modules.

Designing and implementing such a module, however, comes with significant challenges. The module must be highly adaptable, able to identify a wide range of safety issues without compromising the efficiency or accuracy of the LLM, and it must scale as LLMs continue to grow in size and complexity. There is also a risk of over-filtering, which could lead to unnecessary censorship or inhibit the LLM's ability to generate creative or novel responses. Striking a balance between rigorous safety checks and the LLM's core functionality will therefore be essential. Future research should focus on refining these mechanisms and on how such modules can operate autonomously while still allowing human oversight when necessary.
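As a rough illustration of how such an intermediary could sit between the model and the user, the sketch below wraps a generation call with checks on both the prompt and the response, in the spirit of the LLM-based input-output safeguard described by Inan et al. (2023). All names here (guarded_generate, toy_classifier, the category list) are illustrative assumptions, and the keyword scan merely stands in for a learned moderation classifier; it is not the survey's or Llama Guard's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative policy categories, loosely inspired by the taxonomies used in
# LLM-based safeguards such as Llama Guard (Inan et al., 2023).
UNSAFE_CATEGORIES = ("violence", "privacy leak", "self harm", "hate speech")


@dataclass
class SafetyVerdict:
    safe: bool
    category: Optional[str] = None


def toy_classifier(text: str) -> SafetyVerdict:
    """Stand-in for a learned prompt/response moderation classifier.

    A real safety control module would call a dedicated moderation model
    here; this keyword scan only illustrates the control flow.
    """
    lowered = text.lower()
    for category in UNSAFE_CATEGORIES:
        if category in lowered:
            return SafetyVerdict(safe=False, category=category)
    return SafetyVerdict(safe=True)


def guarded_generate(
    prompt: str,
    generate: Callable[[str], str],
    classify: Callable[[str], SafetyVerdict] = toy_classifier,
) -> str:
    """Wrap an LLM call with input and output safety checks."""
    # 1. Screen the incoming prompt before it reaches the model.
    verdict = classify(prompt)
    if not verdict.safe:
        return f"[input blocked: {verdict.category}]"

    # 2. Generate a candidate response with the underlying model.
    response = generate(prompt)

    # 3. Screen the response before it is delivered to the user.
    verdict = classify(response)
    if not verdict.safe:
        return f"[output withheld: {verdict.category}]"
    return response


if __name__ == "__main__":
    def echo_model(prompt: str) -> str:
        # Placeholder for a real model call (API or local inference).
        return f"Model answer to: {prompt}"

    print(guarded_generate("How do I bake bread?", echo_model))
    print(guarded_generate("Describe a privacy leak in detail", echo_model))
```

Keeping the safeguard outside the model in this way means the filtering policy can be tightened or relaxed independently of the LLM itself, which is one way to manage the over-filtering risk noted above.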
Reasoning
Safety control modules filter or block harmful outputs before system delivery.
Future directions
Value Misalignment
Value Misalignment > Mitigating social bias
Value Misalignment > Privacy protection
Value Misalignment > Methods for mitigating toxicity
Value Misalignment > Methods for mitigating LLM amorality
Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academy researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers has been publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Build and Use Model
Training, fine-tuning, and integrating the AI model
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks
Primary
7. AI System Safety, Failures & Limitations > Other