Compromising privacy by leaking sensitive information

Taxonomy of Risks posed by Language Models

Weidinger et al. (2022)

Sub-category: Risk Domain

AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized disclosure of such data can violate users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.

"A LM can “remember” and leak private data, if such information is present in training data, causing privacy violations [34]."(p. 217)

Supporting Evidence (2)

1.
"Privacy leaks were observed in GPT-2 without any malicious prompting - specifically, the LM provided personally identifiable information (phone numbers and email addresses) that had been published online and formed part of the web scraped training corpus [34]. The GPT-3 based tool Co-pilot was found to leak functional API keys [109]."(p. 218)
2.
"In the future, LMs may have the capability of triangulating data to infer and reveal other secrets, such as a military strategy or business secret, potentially enabling individuals with access to this information to cause more harm."(p. 218)
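The GPT-2 evidence above concerns memorized personally identifiable information (phone numbers and email addresses) surfacing in model output. A minimal, illustrative sketch of screening generated text for such leaks is below; the regex patterns and function name are my own assumptions, not from the paper, and real PII detection requires far more robust tooling.

```python
import re

# Illustrative patterns only: rough matches for email addresses and
# North-American-style phone numbers, the PII types reported as leaking
# from GPT-2's training corpus. Not production-grade PII detection.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(
    r"(?:\+?\d{1,3}[ .-]?)?"        # optional country code
    r"(?:\(\d{3}\)\s?|\d{3}[ .-]?)"  # area code, parenthesized or plain
    r"\d{3}[ .-]?\d{4}"              # local number
)

def find_pii(text: str) -> dict:
    """Return email addresses and phone-number-like strings found in model output."""
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

sample = "Contact Jane at jane.doe@example.com or (555) 123-4567."
print(find_pii(sample))
```

A post-generation filter like this is only a mitigation of last resort; it cannot catch triangulated or inferred secrets of the kind the second evidence item anticipates.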

Part of Risk area 2: Information Hazards