Addressing compliance challenges involves ensuring data privacy and security, mitigating bias, promoting fairness, and enhancing transparency.
Developing comprehensive governance frameworks is essential for tackling these issues effectively [23]. Ethical language model development must be prioritized to safeguard against bias and uphold accountability [23]. Policies should recognize both the broad capabilities and the constraints of today's LLMs, advocating transparency, responsibility, and ethical application [15]. Continuous monitoring is indispensable for promptly identifying and rectifying compliance issues, and established ethical guidelines and governance frameworks help ensure that LLMs align with societal values and democratic principles [1]. Additionally, Chu et al. proposed a softmax regression approach that helps models avoid reproducing copyrighted data during training and inference [5].
Robust Model Development
LLMs require thorough development, with extensive testing and evaluation to address security vulnerabilities and biases. Several techniques mitigate issues like overfitting, including regularization, dropout, batch normalization, and label smoothing [24]. Adherence to industry-standard guidelines and best practices is likewise essential for mitigating adversarial attacks, and adversarial training and ensemble methods are widely used defenses [52], [58].
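Of the overfitting mitigations listed above, label smoothing is the simplest to illustrate: instead of a one-hot target, each class receives a small share of the probability mass, which discourages the model from becoming overconfident. A minimal sketch (the smoothing factor and class count are illustrative, not taken from the survey):

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Replace a one-hot target with a smoothed distribution.

    The true class keeps 1 - epsilon of the probability mass; the
    remaining epsilon is spread uniformly over all classes.
    """
    k = len(one_hot)
    return [(1.0 - epsilon) * p + epsilon / k for p in one_hot]

target = [0.0, 1.0, 0.0, 0.0]    # one-hot target for class 1
smoothed = smooth_labels(target)  # ~ [0.025, 0.925, 0.025, 0.025]
```

The smoothed targets still sum to one, so they remain a valid distribution for a cross-entropy loss.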
Privacy-Preserving Techniques
One approach involves centralized privacy settings, where the service provider configures privacy settings on behalf of end-users [33], [45]. Conversely, other methods empower end-users to set up privacy measures for their data themselves. An example of this is Privacy-Preserving Prompt Tuning (RAPT) [34].
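The user-side end of this idea can be made concrete with a toy pre-submission redaction pass: obvious personal identifiers are masked before a prompt ever leaves the user's machine. The patterns below are illustrative stand-ins, not part of RAPT, which operates at the prompt-tuning level and is considerably more involved.

```python
import re

# Illustrative patterns only; a real deployment would use a vetted PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Mask obvious PII before the prompt is sent to a remote LLM service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-123-4567 about the invoice."))
```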
Secure Data Handling
Following industry best practices, such as encryption and access control, is crucial to safeguard data from unauthorized access. Implementing strong encryption protocols ensures the secure storage and transmission of private or sensitive information. Additionally, when interacting with end-users and managing their data, it is vital to have effective consent management procedures in place to transparently communicate how data will be collected and processed [23].
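The consent-management point can be sketched as a small audit-friendly record: who agreed to what, and when. The class and field names below are illustrative, not drawn from any particular compliance framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Tracks which processing purposes a user has agreed to, with timestamps."""
    user_id: str
    purposes: dict = field(default_factory=dict)  # purpose -> UTC timestamp

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = datetime.now(timezone.utc)

    def revoke(self, purpose: str) -> None:
        self.purposes.pop(purpose, None)

    def allows(self, purpose: str) -> bool:
        return purpose in self.purposes

record = ConsentRecord("user-42")
record.grant("model_training")
print(record.allows("model_training"))  # consent on file
record.revoke("model_training")
print(record.allows("model_training"))  # consent withdrawn
```

Keeping timestamps per purpose makes it possible to demonstrate, after the fact, exactly what a user had consented to at the time their data was processed.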
Bias Detection and Mitigation
Fleisig et al. proposed an adversarial learning approach, while Dong et al. employed a probing framework with conditional generation to identify and address gender bias [7], [12]. Other bias-mitigation techniques include pre-processing, data filtering, prompt modification, and fine-tuning [35]; GPT-3.5-turbo, for instance, can be further debiased through fine-tuning [35]. Huang et al. also applied few-shot learning and Chain-of-Thought (CoT) prompting to debias code generation [21].
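The pre-processing option above can be illustrated with a toy counterfactual augmentation pass: each training sentence is paired with a gender-swapped copy so both variants appear equally often. The swap table is a deliberately simplified illustration (it ignores ambiguity such as accusative "her"); real pipelines use curated lexicons.

```python
# Simplified swap table for illustration only.
SWAPS = {"he": "she", "she": "he", "his": "her", "him": "her"}

def swap_gendered_terms(sentence: str) -> str:
    """Return the sentence with gendered pronouns swapped (toy version)."""
    return " ".join(SWAPS.get(w.lower(), w) for w in sentence.split())

def augment(corpus):
    """Return the corpus plus a gender-swapped copy of each sentence."""
    return corpus + [swap_gendered_terms(s) for s in corpus]

corpus = ["the engineer said he would review the code"]
print(augment(corpus))
```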
Interpretability and Accountability
Local explainability methods, such as perturbation-based approaches, gradient-based attribution, and linear approximations, are used to compute feature importance; computing Shapley values is another attribution method applicable to LLMs [14]. Global explainability methods, by contrast, include probing and analyzing the distribution of the training data [46].
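A perturbation-based local explanation can be sketched in a few lines: delete each token in turn and record how much the model's score drops. The scorer below is a toy stand-in (a real pipeline would query the LLM for each occluded input); everything else follows the occlusion idea directly.

```python
def score(tokens):
    """Toy stand-in for a model's confidence; rewards the word 'excellent'."""
    return sum(1.0 for t in tokens if t == "excellent")

def occlusion_importance(tokens):
    """Importance of each token = score drop when that token is removed."""
    base = score(tokens)
    return {
        t: base - score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

tokens = "the film was excellent".split()
print(occlusion_importance(tokens))
```

Tokens whose removal leaves the score unchanged get an importance of zero, which is exactly the behavior the toy scorer exposes for every word except "excellent".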
Using Parameter-Efficient Models
Larger models tend to memorize their training data more extensively than compact counterparts, making smaller, parameter-efficient models preferable in scenarios where memorization poses a risk.
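One widely used parameter-efficient technique, low-rank adaptation, keeps the trainable footprint small by learning two thin factor matrices instead of updating a full weight matrix. The section does not name this method specifically; the back-of-the-envelope count below (with an illustrative hidden size) simply shows the scale of savings such approaches target.

```python
def full_update_params(d_in, d_out):
    """Trainable parameters when the full weight matrix W is updated."""
    return d_in * d_out

def low_rank_update_params(d_in, d_out, rank):
    """Trainable parameters when only factors B (d_out x r) and A (r x d_in) are learned."""
    return d_out * rank + rank * d_in

d_in = d_out = 4096  # illustrative hidden size
full = full_update_params(d_in, d_out)
low = low_rank_update_params(d_in, d_out, rank=8)
print(f"trainable parameters: full={full:,} low-rank={low:,}")
```

At rank 8 the low-rank update trains roughly 0.4% of the parameters of the full update, which is why such methods dominate when compute or memorization risk must be kept down.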
Risks, Causes, and Mitigations of Widespread Deployments of Large Language Models (LLMs): A Survey
Sakib, Md Nazmus; Islam, Md Athikul; Pathak, Royal; Arifin, Md Mashrur (2024)
Recent advancements in Large Language Models (LLMs), such as ChatGPT and LLaMA, have significantly transformed Natural Language Processing (NLP) with their outstanding abilities in text generation, summarization, and classification. Nevertheless, their widespread adoption introduces numerous challenges, including issues related to academic integrity, copyright, environmental impacts, and ethical considerations such as data bias, fairness, and privacy. The rapid evolution of LLMs also raises concerns regarding the reliability and generalizability of their evaluations. This paper offers a comprehensive survey of the literature on these subjects, systematically gathered and synthesized from Google Scholar. Our study provides an in-depth analysis of the risks associated with specific LLMs, identifying sub-risks, their causes, and potential solutions. Furthermore, we explore the broader challenges related to LLMs, detailing their causes and proposing mitigation strategies. Through this literature analysis, our survey aims to deepen the understanding of the implications and complexities surrounding these powerful models. © 2024 IEEE.