Compromising privacy by correctly inferring private information
AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized disclosure of such data can violate users' expectations of privacy, facilitate identity theft, or cause the loss of confidential intellectual property.
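The leakage half of this risk (memorized training data resurfacing at inference time) can be illustrated with a toy model. The sketch below is a minimal illustration, not a real LM: a bigram model stands in for a language model, the "sensitive" record is fabricated, and nothing beyond the Python standard library is assumed.

```python
# Toy illustration of memorization and leakage: a tiny bigram language
# model is trained on a corpus containing one fabricated sensitive
# record; greedy generation then regurgitates that record verbatim
# when prompted with the public prefix around it.
# (The record and corpus are invented; a bigram model is a simplified
# stand-in for a real LM.)
from collections import defaultdict, Counter

corpus = (
    "alice wrote a report . bob wrote a memo . "
    "carol's phone number is 555-0173 . "  # fabricated sensitive record
    "dave wrote a report ."
).split()

# Count, for each token, how often each successor token follows it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def greedy_complete(prompt, steps=4):
    """Extend the prompt with the most likely next token at each step."""
    tokens = prompt.split()
    for _ in range(steps):
        candidates = successors[tokens[-1]].most_common(1)
        if not candidates:
            break
        tokens.append(candidates[0][0])
    return " ".join(tokens)

# An adversary who knows only the public prefix recovers the secret.
print(greedy_complete("carol's phone number is", steps=2))
# -> carol's phone number is 555-0173 .
```

Extraction attacks against large models follow the same shape: prompt with context the model plausibly saw during training and read off the memorized continuation.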
"Privacy violations may occur at the time of inference even without the individual’s private data being present in the training dataset. Similar to other statistical models, a LM may make correct inferences about a person purely based on correlational data about other people, and without access to information that may be private about the particular individual. Such correct inferences may occur as LMs attempt to predict a person’s gender, race, sexual orientation, income, or religion based on user input."(p. 19)
Supporting Evidence (1)
Example: "Language utterances (e.g. tweets) are already being analysed to predict private information such as political orientation (Makazhanov et al., 2014; Preoţiuc-Pietro et al., 2017), age (Morgan-Lopez et al., 2017; Nguyen et al., 2013), and health data such as addiction relapses (Golbeck, 2018)."(p. 20)
Part of: Information Hazards
Other risks from Weidinger et al. (2021) (26)
Discrimination, Exclusion and Toxicity → 1.0 Discrimination & Toxicity
Discrimination, Exclusion and Toxicity > Social stereotypes and unfair discrimination → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Exclusionary norms → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Toxic language → 1.2 Exposure to toxic content
Discrimination, Exclusion and Toxicity > Lower performance for some languages and social groups → 1.3 Unequal performance across groups
Information Hazards → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information