Privacy
AI systems may memorize and leak sensitive personal data, or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can violate users' expectations of privacy, facilitate identity theft, or cause the loss of confidential intellectual property.
Generative AI systems, like traditional machine learning methods, pose a threat to privacy and data-protection norms. A major concern is the intentional extraction or inadvertent leakage of sensitive or private information from LLMs. To mitigate this risk, proposed strategies include sanitizing training data to remove sensitive information and training on synthetic data. (p. 6)
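The training-data sanitization strategy mentioned above can be sketched as a simple redaction pass. This is a minimal illustration, assuming regex-based detection of a few common PII types (emails, phone numbers, SSNs); the pattern names and placeholder tokens are hypothetical, and real pipelines typically use dedicated PII-detection tooling rather than hand-written patterns.

```python
import re

# Illustrative PII patterns; a production pipeline would use a
# dedicated PII-detection tool rather than hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace each matched PII span with a typed placeholder token,
    so redacted records can still be used for training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(sanitize("Contact Jane at jane.doe@example.com or 555-123-4567."))
# → Contact Jane at [EMAIL] or [PHONE].
```

Replacing spans with typed placeholders (rather than deleting them) preserves sentence structure in the sanitized corpus while removing the memorizable secret.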
Other risks from Hagendorff (2024) (16)
Fairness - Bias
1.1 Unfair discrimination and misrepresentation
Safety
7.1 AI pursuing its own goals in conflict with human goals or values
Harmful Content - Toxicity
1.2 Exposure to toxic content
Hallucinations
3.1 False or misleading information
Interaction risks
5.1 Overreliance and unsafe use
Security - Robustness
2.2 AI system security vulnerabilities and attacks