OpenAI used outsourced Kenyan workers earning less than $2 per hour to label extremely disturbing content including child sexual abuse, violence, and torture to build safety filters for ChatGPT, causing severe psychological trauma to workers.
OpenAI contracted with Sama, a San Francisco-based outsourcing firm, from November 2021 to February 2022 to have Kenyan workers label toxic content used to build safety filters for ChatGPT. Around 50 workers in Kenya were paid between $1.32 and $2 per hour to read and categorize tens of thousands of text passages describing graphic violence, child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. Workers were expected to label 150 to 250 passages per nine-hour shift, with some passages exceeding 1,000 words. OpenAI paid Sama $12.50 per hour for the service under three contracts worth about $200,000 in total.

The work caused severe psychological trauma, with workers reporting nightmares, recurring disturbing visions, social withdrawal, and relationship breakdowns. One worker said he was unable to get close to his stepdaughter after reading descriptions of child abuse. Despite promises of counseling, workers had limited access to mental health support.

Sama terminated the project eight months early, in February 2022, ending its relationship with OpenAI after a dispute over a separate image-collection project that included illegal content. The early termination resulted in job losses and reduced income for workers who had been receiving bonuses for handling explicit content.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Social and economic inequalities caused by widespread use of AI, for example by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Pre-deployment
Occurring before the AI is deployed