Technical mechanisms and engineering interventions that directly modify how an AI system processes inputs, generates outputs, or operates, including changes to models, training procedures, runtime behaviors, and supporting hardware.
Agent identifiers
One crucial method for enhancing visibility into agent behavior is the use of agent identifiers: metadata attached to an agent that records its deployment context and underlying systems, helping observers track and understand its actions (Chan et al., 2024). Agents should also clearly disclose their non-human nature when interacting with humans or other systems, ensuring transparency in their operations.
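As a minimal sketch of what such an identifier could carry, the snippet below defines a small metadata record and serializes it for attachment to outgoing messages. The field names, values, and serialization format are all illustrative assumptions, not a scheme specified by Chan et al. (2024).

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AgentIdentifier:
    """Metadata attached to an agent's outgoing messages (all fields illustrative)."""
    agent_id: str            # unique ID for this particular deployment
    operator: str            # organization responsible for the deployment
    base_model: str          # underlying system the agent is built on
    is_human: bool = False   # agents should disclose their non-human nature

    def as_header(self) -> str:
        # Serialize for inclusion in, e.g., an HTTP header or message envelope.
        return json.dumps(asdict(self))

ident = AgentIdentifier(agent_id="agent-7f3a", operator="ExampleCorp",
                        base_model="example-model-v1")
print(ident.as_header())
```

A receiving system could parse this header to decide how to treat the message, e.g., applying stricter rate limits or labeling the interaction as non-human.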
1.2.5 Provenance & Watermarking

Real-time monitoring
Real-time monitoring of agents is essential for maintaining control over their actions and preventing unauthorized behavior. This includes restricting the tools and permissions available to an agent, which limits its ability to access or misuse sensitive information, and screening outputs in real time to detect and block the leakage of sensitive data, so that the agent operates within its intended boundaries.
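The two controls described above, a tool allowlist and an output screen, can be sketched as follows. The tool names, the leak-detection pattern, and the withheld-output message are hypothetical placeholders; a real deployment would use far more robust detection than a single regex.

```python
import re

ALLOWED_TOOLS = {"search", "calculator"}   # permissions granted to this agent (illustrative)
# Crude illustrative pattern for credential-like strings; real systems need stronger detection.
SECRET_PATTERN = re.compile(r"\b(?:api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def invoke_tool(name, fn, *args):
    """Refuse any tool call outside the allowlist before it executes."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not permitted for this agent")
    return fn(*args)

def screen_output(text):
    """Run on every agent message; withhold outputs that look like credential leaks."""
    if SECRET_PATTERN.search(text):
        return "[output withheld: possible sensitive-data leak]"
    return text
```

For example, `screen_output("api_key: abc123")` is withheld, while ordinary text passes through unchanged, and any call to a tool not in `ALLOWED_TOOLS` raises before the tool runs.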
1.2.3 Monitoring & Detection

Activity logging
Activity logs play a critical role in understanding and auditing agent behavior. By recording detailed input and output data, they make it possible to identify improper communications or actions and to trace and analyze any problematic behavior after the fact, enhancing accountability and transparency (Roger & Greenblatt, 2023).
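One simple way to realize such logging is an append-only JSON Lines file, with one timestamped record per agent step. The file name and record fields below are assumptions for illustration, not a standard schema.

```python
import json
import time

def log_activity(path, step):
    """Append one input/output record per line (JSONL) for later audit."""
    record = {"ts": time.time(), **step}   # timestamp plus the step's details
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical agent step being recorded.
log_activity("agent_log.jsonl",
             {"input": "summarise report.pdf",
              "output": "The report argues ...",
              "tool_calls": ["read_file"]})
```

Because each line is an independent JSON object, an auditor can stream or grep the log without loading it whole, which suits the trace-back analysis described above.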
1.2.3 Monitoring & Detection

Privacy protection
As language models are increasingly deployed, especially in commercial contexts, there is growing emphasis on providing privacy assurances to clients. These include ensuring that language model APIs do not record inputs or outputs, disabling security filters and audit classifiers, and offering to delete logs after a set retention period. Such measures balance the need for transparency against the protection of sensitive information.
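The log-deletion option mentioned above can be sketched as a retention sweep that removes any log file older than a fixed window. The 30-day window and directory layout are illustrative assumptions; actual retention policies would be contractually defined.

```python
import os
import time

RETENTION_SECONDS = 30 * 24 * 3600   # illustrative 30-day retention window

def purge_expired_logs(log_dir):
    """Delete every log file in log_dir older than the retention window."""
    now = time.time()
    removed = []
    for name in sorted(os.listdir(log_dir)):
        path = os.path.join(log_dir, name)
        # Use the file's modification time as a proxy for when it was written.
        if os.path.isfile(path) and now - os.path.getmtime(path) > RETENTION_SECONDS:
            os.remove(path)
            removed.append(name)
    return removed
```

Run periodically (e.g., from a daily scheduled job), this keeps recent logs available for auditing while honoring the deletion assurance given to clients.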
2.3 Operations & Security

Risk categories addressed, with the entity each maps to:

Value Misalignment: 99.9 Other
Value Misalignment > Mitigating social bias: 1 AI System
Value Misalignment > Privacy protection: 1 AI System
Value Misalignment > Methods for mitigating toxicity: 1 AI System
Value Misalignment > Methods for mitigating LLM amorality: 1 AI System
Robustness to attack: 1 AI System

Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academy researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers has been publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Unable to classify
Could not be classified to a specific lifecycle stage
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks