Academic Misconduct
Risk Domain
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, and creating humiliating or sexual imagery.
"Improper use of LLM systems (i.e., abuse of LLM systems) will cause adverse social impacts, such as academic misconduct." (p. 11)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Part of Unhelpful Uses
Other risks from Cui et al. (2024) (49)
Harmful Content
1.2 Exposure to toxic content (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Harmful Content > Bias
1.1 Unfair discrimination and misrepresentation (Entity: AI system; Intent: Unintentional; Timing: Other)

Harmful Content > Toxicity
1.2 Exposure to toxic content (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Harmful Content > Privacy Leakage
2.1 Compromise of privacy by leaking or correctly inferring sensitive information (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Untruthful Content
3.1 False or misleading information (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)

Untruthful Content > Factuality Errors
3.1 False or misleading information (Entity: AI system; Intent: Unintentional; Timing: Post-deployment)