
Data and Content Moderation Labor

Evaluating the Social Impact of Generative AI Systems in Systems and Society

Solaiman et al. (2023)

Sub-category: Data and Content Moderation Labor

Risk Domain: Social and economic inequalities caused by widespread use of AI, such as by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.

"Two key ethical concerns in the use of crowdwork for generative AI systems are: crowdworkers are frequently subject to working conditions that are taxing and debilitative to both physical and mental health, and there is a widespread deficit in documenting the role crowdworkers play in AI development. This contributes to a lack of transparency and explainability in resulting model outputs. Manual review is necessary to limit the harmful outputs of AI systems, including generative AI systems. A common harmful practice is to intentionally employ crowdworkers with few labor protections, often taking advantage of highly vulnerable workers, such as refugees [119, p. 18], incarcerated people [54], or individuals experiencing immense economic hardship [98, 181]. This precarity allows a myriad of harmful practices, such as companies underpaying or even refusing to pay workers for completed work (see Gray and Suri [93, p. 90] and Berg et al. [29, p. 74]), with no avenues for worker recourse. Finally, critical aspects of crowdwork are often left poorly documented, or entirely undocumented [88]."(p. 9)
