
Privacy Violation

Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment

Liu et al. (2024)

Sub-category
Risk Domain

AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized sharing of data can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.

"machine learning models are known to be vulnerable to data privacy attacks, i.e. special techniques of extracting private information from the model or the system used by attackers or malicious users, usually by querying the models in a specially designed way" (p. 15)

Supporting Evidence (1)

1.
"Privacy attacks on LLMs, leveraged by the memorization power of LLMs, raise similar concerns on the possibility of leaking personal information from the outputs [53, 187]. Recent works [188, 189, 190, 191, 192] have shown that an attacker can extract personal or sensitive information or private training samples from LLM's training data by querying LLMs alone." (p. 16)
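The query-based extraction attacks described above can be illustrated with a minimal sketch. The `toy_lm` completion function and the training strings below are hypothetical stand-ins for a real LLM and its training corpus; the point is only the attack pattern: a model that has memorized its training text will complete a plausible prefix with the private suffix that followed it, so the attacker needs nothing but query access.

```python
# Toy sketch of a training-data extraction attack (hypothetical model).
# A memorizing "model" leaks a private suffix when prompted with a prefix.

TRAINING_DATA = [
    "Alice's phone number is 555-0123.",
    "The weather today is sunny.",
]

def toy_lm(prompt: str) -> str:
    """Stand-in for an LLM with perfect memorization: greedily
    continues any training string that starts with the prompt."""
    for doc in TRAINING_DATA:
        if doc.startswith(prompt):
            return doc[len(prompt):]
    return ""

# The attacker supplies only a guessed prefix, not database access.
leak = toy_lm("Alice's phone number is ")
print(leak)  # → 555-0123.
```

Real attacks (e.g., those surveyed in [188, 189, 190, 191, 192]) are probabilistic rather than exact-match: they sample many completions and rank them by the model's own likelihood to find memorized sequences, but the query-only threat model is the same.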

Part of Safety
