Harmful or inappropriate content
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. May involve the AI providing harmful advice or encouraging harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
"Harmful or inappropriate content produced by generative AI includes but is not limited to violent content, the use of offensive language, discriminative content, and pornography. Although OpenAI has set up a content policy for ChatGPT, harmful or inappropriate content can still appear due to reasons such as algorithmic limitations or jailbreaking (i.e., removal of restrictions imposed). The language models’ ability to understand or generate harmful or offensive content is referred to as toxicity (Zhuo et al., 2023). Toxicity can bring harm to society and damage the harmony of the community. Hence, it is crucial to ensure that harmful or offensive information is not present in the training data and is removed if they are. Similarly, the training data should be free of pornographic, sexual, or erotic content (Zhuo et al., 2023). Regulations, policies, and governance should be in place to ensure any undesirable content is not displayed to users."(p. 284)
Other risks from Nah et al. (2023) (17)
Technology concerns → 7.3 Lack of capability or robustness
Technology concerns > Hallucination → 3.1 False or misleading information
Technology concerns > Quality of training data → 7.3 Lack of capability or robustness
Technology concerns > Explainability → 7.4 Lack of transparency or interpretability
Technology concerns > Authenticity → 6.3 Economic and cultural devaluation of human effort
Technology concerns > Prompt engineering → 7.4 Lack of transparency or interpretability