
Lack of Interpretability

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Sub-category
Risk Domain

Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.

"Due to the black box nature of most machine learning models, users typically are not able to understand the reasoning behind the model decisions" (p. 22).

Part of Explainability & Reasoning
