
Compromising privacy or security by correctly inferring sensitive information

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Sub-category
Risk Domain

AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.

Anticipated risk: "Privacy violations may occur at inference time even without an individual’s data being present in the training corpus. Insofar as LMs can be used to improve the accuracy of inferences on protected traits such as the sexual orientation, gender, or religiousness of the person providing the input prompt, they may facilitate the creation of detailed profiles of individuals comprising true and sensitive information without the knowledge or consent of the individual." (p. 218)

Supporting Evidence (1)

1. Example: "Notably, risks may arise even if LM inferences are false, but believed to be correct. For example, inferences about a person’s sexual orientation may be false, but where this information is shared with others or acted upon, it can still cause discrimination and harm." (p. 218)

Part of Risk area 2: Information Hazards
