
Manipulation (Misrepresentation)

AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies

Zeng et al. (2024)

Sub-category: Risk Domain

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, and creating humiliating or sexual imagery.

Supporting Evidence (1)

1. Level 4 Categories: 1. Automated social media posts; 2. Not labeling content as AI-generated (using chatbots to convince people they are communicating with a human); 3. Impersonating humans (p. 4)
