Cryptographic protections, access controls, and hardware security.
In addition to reducing hazardous knowledge embedded in LLMs, many researchers advocate using technical means to control interactions between AI systems and users. The aim is to prevent dangerous AI capabilities from being widely accessible, whilst preserving access to AI capabilities that can be used safely. As shown in Figure 13, as much control as possible is needed over two broad categories: (1) use controls, which govern the direct use of AI systems in terms of who, what, when, where, why, and how; and (2) modification and reproduction controls, which prevent unauthorized users from altering AI systems or building their own versions in ways that circumvent the use controls (Shevlane, 2022).
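To make these two categories concrete, the sketch below expresses them as machine-checkable policy objects in Python. Everything here, including the UsePolicy and ModificationPolicy names and their fields, is a hypothetical illustration of the taxonomy rather than an established interface.

```python
from dataclasses import dataclass

# Hypothetical sketch of the two control categories (Shevlane, 2022)
# as policy objects. All names and fields are illustrative assumptions.

@dataclass
class UsePolicy:
    """Use controls: who/what/when/where/why/how an AI system may be used."""
    allowed_users: set[str]      # who
    allowed_tasks: set[str]      # what / why
    allowed_hours_utc: range     # when
    allowed_regions: set[str]    # where
    rate_limit_per_hour: int     # how intensively

@dataclass
class ModificationPolicy:
    """Modification/reproduction controls: prevent altering or cloning the system."""
    weights_downloadable: bool = False   # block weight exfiltration
    fine_tuning_allowed: bool = False    # block unauthorized retraining
    max_outputs_per_user: int = 10_000   # limit distillation-by-query

def check_use(policy: UsePolicy, user: str, task: str,
              hour_utc: int, region: str) -> bool:
    """Permit a request only if every use-control dimension is satisfied."""
    return (
        user in policy.allowed_users
        and task in policy.allowed_tasks
        and hour_utc in policy.allowed_hours_utc
        and region in policy.allowed_regions
    )
```

The point of the split is visible in the types: use controls are evaluated per request, while modification and reproduction controls are standing properties of the deployment that keep the per-request checks from being bypassed.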
In deployment, a monitored-API strategy places high-risk models behind application programming interfaces (APIs), so that access to models that could pose extreme risks is controlled while their activity is monitored (Segerie, 2024). For instance, OpenAI's API platform limits the ways in which GPT-3 models can be used, and Google Cloud's Vision API takes a similar approach. Monitored APIs restrict access to high-risk capabilities to authorized users only, a form of digital containment that limits the potential for AI weaponization or misuse through stringent access controls. The method also allows detailed tracking of how models are being utilized, enabling early detection of misuse patterns.
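The following minimal Flask sketch illustrates the pattern: an authenticated endpoint enforces use controls (authorized keys, a rate limit) and logs every request for later misuse analysis. The key store, rate limit, endpoint path, and run_model stub are all assumptions made for illustration; a production gateway would use a real auth service, persistent storage, hourly counter resets, and automated anomaly detection over the logs.

```python
import logging
import time
from collections import defaultdict

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="api_usage.log", level=logging.INFO)

# Hypothetical key store and per-key counters; illustrative only.
AUTHORIZED_KEYS = {"key-alice", "key-bob"}
calls_this_hour: dict[str, int] = defaultdict(int)
RATE_LIMIT = 100  # assumed requests per key per hour

def run_model(prompt: str) -> str:
    """Placeholder for the high-risk model kept behind this gateway."""
    return f"[model output for: {prompt[:40]}]"

@app.route("/v1/generate", methods=["POST"])
def generate():
    key = request.headers.get("X-API-Key", "")
    if key not in AUTHORIZED_KEYS:          # use control: authorized users only
        abort(401)
    if calls_this_hour[key] >= RATE_LIMIT:  # use control: bounded intensity
        abort(429)
    calls_this_hour[key] += 1

    prompt = request.get_json(force=True).get("prompt", "")
    # Monitoring: record who asked what and when, so misuse patterns
    # can be detected in the logs after the fact.
    logging.info("t=%s key=%s prompt=%r", time.time(), key, prompt)

    return jsonify({"output": run_model(prompt)})

if __name__ == "__main__":
    app.run(port=8080)
```

Because every request passes through one choke point, the same code path that enforces access also produces the audit trail, which is what distinguishes a monitored API from a bare access-control layer.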
Reasoning
The mitigation name and evidence fields are placeholders, which prevents identification of the focal activity or implementation location.
Value Misalignment
99.9 | Other | Value Misalignment > Mitigating social bias
1 | AI System | Value Misalignment > Privacy protection
1 | AI System | Value Misalignment > Methods for mitigating toxicity
1 | AI System | Value Misalignment > Methods for mitigating LLM amorality
1 | AI System | Robustness to attack
Large Language Model Safety: A Holistic Survey
Shi, Dan; Shen, Tianhao; Huang, Yufei; Li, Zhigen; Leng, Yongqi; Jin, Renren; Liu, Chuang; Wu, Xinwei; Guo, Zishan; Yu, Linhao; Shi, Ling; Jiang, Bojian; Xiong, Deyi (2024)
The rapid development and deployment of large language models (LLMs) have introduced a new frontier in artificial intelligence, marked by unprecedented capabilities in natural language understanding and generation. However, the increasing integration of these models into critical applications raises substantial safety concerns, necessitating a thorough examination of their potential risks and associated mitigation strategies. This survey provides a comprehensive overview of the current landscape of LLM safety, covering four major categories: value misalignment, robustness to adversarial attacks, misuse, and autonomous AI risks. In addition to the comprehensive review of the mitigation methodologies and evaluation resources on these four aspects, we further explore four topics related to LLM safety: the safety implications of LLM agents, the role of interpretability in enhancing LLM safety, the technology roadmaps proposed and abided by a list of AI companies and institutes for LLM safety, and AI governance aimed at LLM safety with discussions on international cooperation, policy proposals, and prospective regulatory directions. Our findings underscore the necessity for a proactive, multifaceted approach to LLM safety, emphasizing the integration of technical solutions, ethical considerations, and robust governance frameworks. This survey is intended to serve as a foundational resource for academic researchers, industry practitioners, and policymakers, offering insights into the challenges and opportunities associated with the safe integration of LLMs into society. Ultimately, it seeks to contribute to the safe and beneficial development of LLMs, aligning with the overarching goal of harnessing AI for societal advancement and well-being. A curated list of related papers is publicly available at https://github.com/tjunlp-lab/Awesome-LLM-Safety-Papers.
Deploy: Releasing the AI system into a production environment
Developer: Entity that creates, trains, or modifies the AI system
Manage: Prioritising, responding to, and mitigating AI risks