Home/Risks/National Institute of Standards and Technology (2024)/Obscene, Degrading, and/or Abusive Content
Obscene, Degrading, and/or Abusive Content
Risk Domain
AI that exposes users to harmful, abusive, unsafe or inappropriate content. May involve providing advice or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.
"Eased production of and access to obscene, degrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse material (CSAM), and nonconsensual intimate images (NCII) of adults."(p. 4)
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk is pre- or post-deployment
Supporting Evidence (3)
1. "GAI can ease the production of and access to illegal non-consensual intimate imagery (NCII) of adults, and/or child sexual abuse material (CSAM). GAI-generated obscene, abusive or degrading content can create privacy, psychological and emotional, and even physical harms, and in some cases may be illegal." (p. 11)
2. "Generated explicit or obscene AI content may include highly realistic “deepfakes” of real individuals, including children. The spread of this kind of material can have downstream negative consequences: in the context of CSAM, even if the generated images do not resemble specific individuals, the prevalence of such images can divert time and resources from efforts to find real-world victims. Outside of CSAM, the creation and spread of NCII disproportionately impacts women and sexual minorities, and can have subsequent negative consequences including decline in overall mental health, substance abuse, and even suicidal thoughts." (p. 11)
3. "Data used for training GAI models may unintentionally include CSAM and NCII. A recent report noted that several commonly used GAI training datasets were found to contain hundreds of known images of CSAM. Even when trained on “clean” data, increasingly capable GAI models can synthesize or produce synthetic NCII and CSAM. Websites, mobile apps, and custom-built models that generate synthetic NCII have moved from niche internet forums to mainstream, automated, and scaled online businesses." (p. 11)
Other risks from National Institute of Standards and Technology (2024) (11)
Risk | Subdomain | Entity | Intent | Timing
CBRN Information or Capabilities | 4.2 Cyberattacks, weapon development or use, and mass harm | Other | Other | Post-deployment
Confabulation | 3.1 False or misleading information | AI system | Unintentional | Post-deployment
Dangerous, Violent or Hateful Content | 1.2 Exposure to toxic content | AI system | Other | Post-deployment
Data Privacy | 2.1 Compromise of privacy by leaking or correctly inferring sensitive information | AI system | Unintentional | Post-deployment
Environmental Impacts | 6.6 Environmental harm | Other | Unintentional | Pre-deployment
Harmful Bias or Homogenization | 1.1 Unfair discrimination and misrepresentation | Other | Unintentional | Other