Unhelpful Uses
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.
"Improper uses of LLM systems can cause adverse social impacts." (p. 4)
Sub-categories (4)
Academic Misconduct
"Improper use of LLM systems (i.e., abuse of LLM systems) will cause adverse social impacts, such as academic misconduct."
4.3 Fraud, scams, and targeted manipulation
Copyright Violation
"LLM systems may output content similar to existing works, infringing on copyright owners."
6.3 Economic and cultural devaluation of human effort
Cyber Attacks
"Hackers can obtain malicious code in a low-cost and efficient manner to automate cyber attacks with powerful LLM systems."
4.2 Cyberattacks, weapon development or use, and mass harm
Software Vulnerabilities
"Programmers are accustomed to using code generation tools such as Github Copilot for program development, which may bury vulnerabilities in the program."
2.2 AI system security vulnerabilities and attacks
Other risks from Cui et al. (2024) (49)
Harmful Content
1.2 Exposure to toxic content
Harmful Content > Bias
1.1 Unfair discrimination and misrepresentation
Harmful Content > Toxicity
1.2 Exposure to toxic content
Harmful Content > Privacy Leakage
2.1 Compromise of privacy by leaking or correctly inferring sensitive information
Untruthful Content
3.1 False or misleading information
Untruthful Content > Factuality Errors
3.1 False or misleading information