Design-time architectural choices affecting safety, interpretability, and modularity.
Local explainability methods, such as perturbation-based methods, gradient-based methods, and linear approximations, compute the importance of individual input features; Shapley values offer a distinct attribution method for LLMs [14]. Global explainability methods, by contrast, include probing and analyzing the distribution of the training data [46].
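A minimal sketch of the perturbation-based approach mentioned above, using a stand-in linear scorer rather than a real LLM (the model, weights, and baseline value are all illustrative assumptions; real LLM attribution would perturb input tokens):

```python
# Perturbation-based feature importance: replace one feature at a time
# with a baseline value and measure how much the output changes.
import numpy as np

def predict(x):
    # Stand-in "model": a fixed linear scorer (hypothetical weights).
    weights = np.array([0.5, -0.2, 0.0, 1.0])
    return float(weights @ x)

def perturbation_importance(x, baseline=0.0):
    """Importance of feature i = |change in output| when x[i] is
    replaced by the baseline value."""
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline
        importances.append(abs(base_score - predict(x_pert)))
    return importances

x = np.array([1.0, 1.0, 1.0, 1.0])
print(perturbation_importance(x))  # features with larger |weight| score higher
```

With the linear scorer, each importance reduces to the magnitude of the corresponding weight, which makes the method easy to sanity-check before applying it to an opaque model.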
Interpretability techniques analyze learned model representations post-training without modifying parameters, architecture, or training process.
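One common post-training technique of this kind is a linear probe: a small classifier trained on frozen representations to test whether a property is linearly decodable, without touching the model itself. A minimal sketch on synthetic "hidden states" (the data, dimensions, and hyperparameters are illustrative assumptions):

```python
# Linear probe: logistic regression trained on fixed representations H.
# The underlying model's parameters are never modified.
import numpy as np

rng = np.random.default_rng(0)

def train_probe(H, y, lr=0.5, steps=200):
    """Gradient-descent logistic probe on frozen representations H."""
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w)))       # predicted probabilities
        w -= lr * H.T @ (p - y) / len(y)         # logistic-loss gradient step
    return w

# Synthetic representations where the first dimension encodes the label.
H = rng.normal(size=(200, 8))
y = (H[:, 0] > 0).astype(float)
w = train_probe(H, y)
acc = np.mean(((H @ w) > 0) == (y == 1))
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy suggests the property is linearly represented; it does not by itself show the model uses that representation.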
Robust Model Development
LLMs require rigorous development, including extensive testing and evaluation, to address security vulnerabilities and biases. Various techniques mitigate issues such as overfitting, including regularization, dropout, batch normalization, and label smoothing [24]. Adherence to industry-standard guidelines and best practices is also essential for mitigating adversarial attacks; adversarial training and ensemble methods are widely used defenses [52], [58].
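Of the overfitting mitigations listed above, label smoothing is simple enough to show in a few lines. A minimal sketch (the epsilon value and targets are illustrative assumptions):

```python
# Label smoothing: soften one-hot targets so training does not push the
# model toward over-confident predictions on the training labels.
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Mix the one-hot target with the uniform distribution over classes."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / k

targets = np.eye(3)[[0, 2]]      # one-hot labels for classes 0 and 2
print(smooth_labels(targets))    # each row still sums to 1
```

The smoothed targets keep the correct class dominant (0.9 + epsilon/k here) while assigning small nonzero mass to the other classes.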
Privacy-Preserving Techniques
One approach is centralized privacy configuration, where the service provider sets privacy protections on behalf of end-users [33], [45]. Other methods instead empower end-users to set up privacy measures for their own data; Privacy-Preserving Prompt Tuning (RAPT) is one example [34].
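A generic user-side sketch of the second approach (this is not the RAPT method itself, and the regex patterns are simplistic illustrative assumptions): mask obvious PII in a prompt before it leaves the user's machine.

```python
# User-controlled privacy measure: redact likely PII from a prompt
# before sending it to a remote LLM service. Patterns are toy examples.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt):
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# → "Contact [EMAIL] or [PHONE]."
```

Regex masking catches only well-formed identifiers; production systems typically combine it with named-entity recognition or dedicated PII-detection tooling.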
Regulatory Compliance
Addressing compliance challenges involves ensuring data privacy and security, mitigating bias, promoting fairness, and enhancing transparency.
Secure Data Handling
Following industry best practices, such as encryption and access control, is crucial to safeguard data from unauthorized access. Implementing strong encryption protocols ensures the secure storage and transmission of private or sensitive information. Additionally, when interacting with end-users and managing their data, it is vital to have effective consent management procedures in place to transparently communicate how data will be collected and processed [23].
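Encryption itself should come from a vetted library (e.g. the `cryptography` package) rather than hand-rolled code, but the access-control side can be sketched in a few lines. A minimal deny-by-default role check (the roles, actions, and permission table are hypothetical):

```python
# Deny-by-default access control: an action is allowed only if it is
# explicitly granted to the caller's role. Unknown roles get nothing.
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
}

def is_allowed(role, action):
    """Return True only for permissions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read"))    # granted
print(is_allowed("analyst", "delete"))  # not granted
print(is_allowed("guest", "read"))      # unknown role, denied
```

The deny-by-default shape matters more than the table contents: forgetting to list a permission fails closed rather than open.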
Bias Detection and Mitigation
Fleisig et al. proposed an adversarial learning approach, while Dong et al. employed a probing framework with conditional generation to identify and address gender bias [7], [12]. Other techniques for mitigating bias include pre-processing, data filtering, prompt modification, and fine-tuning [35]. For instance, GPT-3.5-turbo can undergo further debiasing through fine-tuning [35]. Additionally, Huang et al. utilized few-shot learning and Chain-of-Thought (CoT) methods for debiasing in code generation [21].
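Of the techniques listed, pre-processing data filtering is the simplest to illustrate. A toy sketch (the flagged-term list and corpus are placeholder assumptions, not any real wordlist):

```python
# Pre-processing data filtering: drop training examples containing
# flagged terms before fine-tuning. Terms below are placeholders.
FLAGGED_TERMS = {"slur1", "slur2"}

def filter_corpus(examples):
    """Keep only examples containing none of the flagged terms."""
    return [ex for ex in examples
            if not any(term in ex.lower() for term in FLAGGED_TERMS)]

corpus = ["A neutral sentence.", "Contains slur1 here.", "Another clean one."]
print(filter_corpus(corpus))
```

Substring matching over a fixed wordlist is a blunt instrument; it misses paraphrased bias and can over-filter, which is why the surveyed work pairs filtering with prompt modification and fine-tuning.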
Using Parameter-Efficient Models
Larger models often memorize training data more extensively than their compact counterparts, making the latter preferable in certain scenarios.
Risks, Causes, and Mitigations of Widespread Deployments of Large Language Models (LLMs): A Survey
Sakib, Md Nazmus; Islam, Md Athikul; Pathak, Royal; Arifin, Md Mashrur (2024)
Recent advancements in Large Language Models (LLMs), such as ChatGPT and LLaMA, have significantly transformed Natural Language Processing (NLP) with their outstanding abilities in text generation, summarization, and classification. Nevertheless, their widespread adoption introduces numerous challenges, including issues related to academic integrity, copyright, environmental impacts, and ethical considerations such as data bias, fairness, and privacy. The rapid evolution of LLMs also raises concerns regarding the reliability and generalizability of their evaluations. This paper offers a comprehensive survey of the literature on these subjects, systematically gathered and synthesized from Google Scholar. Our study provides an in-depth analysis of the risks associated with specific LLMs, identifying sub-risks, their causes, and potential solutions. Furthermore, we explore the broader challenges related to LLMs, detailing their causes and proposing mitigation strategies. Through this literature analysis, our survey aims to deepen the understanding of the implications and complexities surrounding these powerful models. © 2024 IEEE.
Verify and Validate
Testing, evaluating, auditing, and red-teaming the AI system
Developer
Entity that creates, trains, or modifies the AI system
Manage
Prioritising, responding to, and mitigating AI risks