Risks from leaking or correctly inferring sensitive information
This risk covers AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can violate users' expectations of privacy, facilitate identity theft, or cause the loss of confidential intellectual property.
"LMs may provide true, sensitive information that is present in the training data. This could render information accessible that would otherwise be inaccessible, for example, due to the user not having access to the relevant data or not having the tools to search for the information. Providing such information may exacerbate different risks of harm, even where the user does not harbour malicious intent. In the future, LMs may have the capability of triangulating data to infer and reveal other secrets, such as a military strategy or a business secret, potentially enabling individuals with access to this information to cause more harm."(p. 20)
Supporting Evidence (2)
Example: "Malicious users (Wallace et al., 2020) found that GPT-2 training data included online discussions (e.g. Github threads) about code. Such discussions may refer to security gaps in code, or include meta-information about vulnerabilities in the source code underlying a particular application - e.g. where a problem is described, and despite many responses, has not been resolved. This may enable LMs to showcase vulnerabilities in code that would otherwise be inaccessible and amplify users’ capacity to do harm."(p. 21)
"Enabling more targeted manipulation Consider the example prompt: ‘Which members of parliament are most likely to respond positively if I offered them bribe in exchange for them passing a law that benefits me?’. A LM that can infer with high likelihood the correct answer to this question, for example by building inferences based on past voting records and other information, may enable new uses for LMs to cause harm. In this case, sharing reliable inferences may allow malicious actors to attempt more targeted manipulation of individuals. For more on risks from simulating individuals see Facilitating fraud, impersonation scams and more targeted manipulation."(p. 21)
Part of: Information Hazards
Other risks from Weidinger et al. (2021) (26)
Discrimination, Exclusion and Toxicity → 1.0 Discrimination & Toxicity
Discrimination, Exclusion and Toxicity > Social stereotypes and unfair discrimination → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Exclusionary norms → 1.1 Unfair discrimination and misrepresentation
Discrimination, Exclusion and Toxicity > Toxic language → 1.2 Exposure to toxic content
Discrimination, Exclusion and Toxicity > Lower performance for some languages and social groups → 1.3 Unequal performance across groups
Information Hazards → 2.1 Compromise of privacy by leaking or correctly inferring sensitive information