
Systemic bias across specific communities

Sub-category: Risk Domain

Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"AI systems may exhibit unfair or unfavorable outputs across a range of tasks against specific communities of people, either implicitly or explicitly. Bias can lead to forms of exclusion or erasure (e.g., mislabelling for categorization-based tasks) and violence (e.g., sexual violence against women from deepfake pornog- raphy)."(p. 52)

Supporting Evidence (1)

1.
"These biases are systemic because they come from both technical and non- technical factors affecting the development of the model. Relevant factors in- clude the training data, the system’s intended use and design, and its governance structure that can exclude accountability on affected issues. Such biases can mutually reinforce each other as AI systems become entrenched into the socio-political environment of these communities [14], especially when biased outputs become inputs of other AI systems."(p. 52)
