Amazon developed an AI recruiting tool between 2014 and 2017 that was found to discriminate against female candidates and was ultimately scrapped due to gender bias and unreliable performance.
Amazon developed an AI-powered recruiting tool between 2014 and 2017 designed to automate hiring by scoring job candidates from one to five stars. The system was trained on resumes submitted to Amazon over a 10-year period, during which the majority of applicants were men, reflecting male dominance in the tech industry. By 2015, Amazon discovered the system was not rating candidates for software developer and other technical positions in a gender-neutral way. The AI taught itself that male candidates were preferable: it penalized resumes containing the word 'women's' (as in 'women's chess club captain') and downgraded graduates of two all-women's colleges. The system also favored candidates who used verbs common on male engineers' resumes, such as 'executed' and 'captured.' Amazon attempted to edit the program to neutralize these biases but could not guarantee the system would not develop other discriminatory sorting methods. The project was ultimately disbanded by early 2017 after executives lost confidence in its ability to perform fairly and effectively. Amazon stated that recruiters looked at the tool's recommendations but never relied solely on its rankings, and that the tool was never used independently to evaluate candidates.
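The mechanism described above, a model absorbing gender bias from historically skewed training data, can be illustrated with a minimal sketch. This is not Amazon's actual system; the tiny synthetic "resumes," labels, and Laplace-smoothed log-odds scorer below are all assumptions chosen only to show how a token like 'women's' can acquire a negative weight when past hiring outcomes skew male.

```python
# Illustrative sketch (NOT Amazon's system): a naive per-token scorer trained
# on synthetic, historically skewed hiring data learns to penalize gendered
# tokens and reward verbs common on past (male) hires' resumes.
from collections import Counter
import math

# Synthetic training data: (resume tokens, hired?) with a male-skewed history.
training = [
    (["executed", "captured", "chess", "club"], 1),
    (["executed", "built", "systems"], 1),
    (["captured", "led", "team"], 1),
    (["women's", "chess", "club", "captain"], 0),
    (["women's", "coding", "society", "led"], 0),
    (["built", "systems", "led"], 1),
]

def token_log_odds(data, smoothing=1.0):
    """Per-token log-odds of the positive (hired) class, Laplace-smoothed."""
    pos, neg = Counter(), Counter()
    for tokens, label in data:
        (pos if label else neg).update(set(tokens))
    n_pos = sum(label for _, label in data)
    n_neg = len(data) - n_pos
    vocab = set(pos) | set(neg)
    return {
        t: math.log((pos[t] + smoothing) / (n_pos + 2 * smoothing))
           - math.log((neg[t] + smoothing) / (n_neg + 2 * smoothing))
        for t in vocab
    }

weights = token_log_odds(training)
# "women's" gets a negative weight purely from the skewed history,
# while "executed" gets a positive one.
print(weights["women's"], weights["executed"])
```

The point of the sketch is that no feature was ever labeled "gender": the bias arrives entirely through correlations in the historical outcomes, which is why simply editing the program after the fact, as the description notes Amazon attempted, cannot guarantee the model will not find other proxies.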
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes or misrepresentation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.