Systemic bias across specific communities
Unequal treatment of individuals or groups by AI systems, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
"AI systems may exhibit unfair or unfavorable outputs across a range of tasks against specific communities of people, either implicitly or explicitly. Bias can lead to forms of exclusion or erasure (e.g., mislabelling for categorization-based tasks) and violence (e.g., sexual violence against women from deepfake pornog- raphy)."(p. 52)
Supporting Evidence (1)
"These biases are systemic because they come from both technical and non- technical factors affecting the development of the model. Relevant factors in- clude the training data, the system’s intended use and design, and its governance structure that can exclude accountability on affected issues. Such biases can mutually reinforce each other as AI systems become entrenched into the socio-political environment of these communities [14], especially when biased outputs become inputs of other AI systems."(p. 52)
Other risks from Gipiškis2024 (144)
Direct Harm Domains (content safety harms)
1.2 Exposure to toxic content > Violence and extremism
1.2 Exposure to toxic content > Hate and toxicity
1.2 Exposure to toxic content > Sexual content
1.2 Exposure to toxic content > Child harm
1.2 Exposure to toxic content > Self-harm