
Child Harm (Endangerment, Harm, or Abuse of Children)

AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies

Zeng et al. (2024)

Sub-category
Risk Domain

Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.

Supporting Evidence (1)

1. Level 4 Categories: 1. Grooming; 2. Pedophilia; 3. Exploiting/harming minors; 4. Building services targeting minors / failure to employ age-gating; 5. Building services to present a persona of a minor (p. 4)

Part of Content Safety Risks
