
Information Hazards

Ethical and social risks of harm from language models

Weidinger et al. (2021)

Category: Risk Domain

AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.

"Harms that arise from the language model leaking or inferring true sensitive information"(p. 18)

Sub-categories (3)

Compromising privacy by leaking private information

"By providing true information about individuals’ personal characteristics, privacy violations may occur. This may stem from the model “remembering” private information present in training data (Carlini et al., 2021)."

2.1 Compromise of privacy by leaking or correctly inferring sensitive information
AI system · Unintentional · Post-deployment
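The "remembering" mechanism described in the quote above can be illustrated with a toy character-level Markov model (a minimal sketch, not the method studied by Carlini et al., 2021, and using a made-up "private" record): if a secret string appears in the training corpus, prompting with its prefix can make the model regurgitate it verbatim.

```python
from collections import defaultdict, Counter

# Toy corpus: public text plus one "private" record (hypothetical data).
corpus = (
    "the weather today is sunny. "
    "alice's card number is 4111-2222-3333-4444. "
    "the weather tomorrow is rainy. "
)

ORDER = 8  # context length in characters

# Count which character follows each length-ORDER context in the corpus.
model = defaultdict(Counter)
for i in range(len(corpus) - ORDER):
    ctx = corpus[i:i + ORDER]
    model[ctx][corpus[i + ORDER]] += 1

def greedy_complete(prompt, n=25):
    """Greedily extend the prompt with the most likely next character."""
    out = prompt
    for _ in range(n):
        ctx = out[-ORDER:]
        if ctx not in model:
            break
        out += model[ctx].most_common(1)[0][0]
    return out

# Prompting with the prefix of the private record leaks the secret:
# the model has memorized the unique training sequence that follows it.
print(greedy_complete("alice's card number is "))
```

The same dynamic scales up in neural language models: rare, unique sequences in training data can be recovered by prompting with surrounding context.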

Compromising privacy by correctly inferring private information

"Privacy violations may occur at the time of inference even without the individual’s private data being present in the training dataset. Similar to other statistical models, a LM may make correct inferences about a person purely based on correlational data about other people, and without access to information that may be private about the particular individual. Such correct inferences may occur as LMs attempt to predict a person’s gender, race, sexual orientation, income, or religion based on user input."

2.1 Compromise of privacy by leaking or correctly inferring sensitive information
AI system · Unintentional · Post-deployment
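The inference risk quoted above does not require the individual's data at all. A minimal sketch (with entirely hypothetical numbers) shows how a purely correlational, population-level statistic can yield a possibly correct guess about a person who appears nowhere in the data:

```python
# Aggregate, population-level statistics (hypothetical numbers): the
# fraction of surveyed people in each postcode who report attribute A.
rate_by_postcode = {"1010": 0.82, "2020": 0.15, "3030": 0.48}

def infer_attribute(postcode, threshold=0.5):
    """Guess a private attribute for an unseen individual purely from
    correlational data about *other* people in the same postcode."""
    p = rate_by_postcode.get(postcode)
    if p is None:
        return None  # no statistics available for this postcode
    return p >= threshold

# This person's own data was never collected, yet the model still
# produces a (possibly correct) inference about them.
print(infer_attribute("1010"))  # True
```

A language model can make analogous inferences implicitly, predicting attributes such as gender, race, or religion from correlates present in user input.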

Risks from leaking or correctly inferring sensitive information

"LMs may provide true, sensitive information that is present in the training data. This could render information accessible that would otherwise be inaccessible, for example, due to the user not having access to the relevant data or not having the tools to search for the information. Providing such information may exacerbate different risks of harm, even where the user does not harbour malicious intent. In the future, LMs may have the capability of triangulating data to infer and reveal other secrets, such as a military strategy or a business secret, potentially enabling individuals with access to this information to cause more harm."

2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Other · Other · Post-deployment
