
Biases in AI-based content moderation algorithms

Risk Domain: Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.

"AI-based content moderation algorithms, while intended to filter harmful content, can perpetuate biases. For example, gender biases within these systems may lead to the disproportionate suppression or 'shadowbanning' of content featuring women [132]." (p. 51)

Supporting Evidence (1)

1. "AI moderation tools may embed and reinforce the objectification of women by classifying and rating images of women as more sexually suggestive compared to similar images of men [132]. This can result in the unintended marginalization of female-led businesses and contribute to broader societal inequalities." (p. 51)
