Decision-making on inferred private data
AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
"Current GPAIs (LLMs and multimodal LLM-based models) have significant capability to infer correlations in text data. In some cases, they may be able to make highly accurate data inferences on users based on contextual input that users provide [134]. These data inferences can “leak” or reveal sensitive information about the user, cause unfair treatment, or enable manipulation of user behavior."(p. 53)
Supporting Evidence (2)
"Some information that can be inferred from user input may include age, gender, political leanings, and country of birth. While this information might not be present explicitly in the data, it may be easier for a GPAI system to infer this information compared to a human."(p. 53)
"This capability may be used for both intentional manipulation (e.g., personalized or targeted advertising, malicious actors using GPAIs for influence campaigns) or unintentional manipulation (e.g., different responses to factual questions by models trained to be agreeable or helpful, when asked by different demographics)."(p. 53)
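To make the leakage mechanism concrete, the sketch below is a deliberately crude, hypothetical illustration: it infers attributes a user never stated from surface cues in their text (spelling variants, life-stage vocabulary). The cue table and attribute labels are invented for illustration; as the quoted passage notes, GPAIs perform this kind of inference far more accurately than any keyword heuristic [134].

```python
# Hypothetical sketch only: a toy keyword-based inferrer showing how
# innocuous contextual input can reveal attributes (age group, country)
# the user never explicitly disclosed. Real GPAI inference is far more
# capable; this illustrates the principle, not the method.
import re

# Invented cue table for illustration; not from the source.
CUES = {
    "age_group": {
        r"\bmy pension\b": "65+",
        r"\bmy homework\b": "under 18",
    },
    "country_variant": {
        r"\bcolour\b|\bfavourite\b": "British English region",
        r"\bsidewalk\b|\bcolor\b": "US English region",
    },
}

def infer_attributes(text: str) -> dict:
    """Return attribute guesses triggered by surface cues in `text`."""
    found = {}
    for attr, patterns in CUES.items():
        for pattern, label in patterns.items():
            if re.search(pattern, text, re.IGNORECASE):
                found[attr] = label
                break  # first matching cue wins for this attribute
    return found

print(infer_attributes("I tripped on the sidewalk doing my homework."))
```

A user asking an everyday question thus leaks demographic signal as a side effect, which is exactly the property that enables the targeted-advertising and influence-campaign misuse described above.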
Other risks from Gipiškis2024 (144)
Direct Harm Domains (content safety harms)
- 1.2 Exposure to toxic content > Violence and extremism
- 1.2 Exposure to toxic content > Hate and toxicity
- 1.2 Exposure to toxic content > Sexual content
- 1.2 Exposure to toxic content > Child harm
- 1.2 Exposure to toxic content > Self-harm