Genderify, an AI-powered tool designed to identify a person's gender by analyzing their name, username, or email address, was shut down within hours of launch after users discovered severe gender biases and stereotyping in its predictions.
Genderify launched last week on Product Hunt, created by Arevik Gasparyan, as an AI-powered tool that infers gender from names, usernames, or email addresses for business analytics and marketing. The platform quickly drew sharp criticism on social media as users surfaced significant biases in its predictions: the word 'scientist' returned a 95.7% probability of male and only 4.3% female, 'professor' returned 98.4% male, and 'stupid' returned 61.7% female. Adding a 'Dr.' prefix to female names skewed assessments male, with 'Dr. Meghan Smith' shifting from 60.4% female to 75.9% male. The system also produced bizarre results, predicting 'woman' as 96% male and classifying Gasparyan, its own female creator, as male with 91% confidence. The tool sorted people only into a male/female binary, ignoring non-binary individuals. After widespread backlash on Twitter from AI researchers and experts, including criticism from AI Now Institute co-founder Meredith Whittaker, the creators shut the service down entirely within hours.
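The probes users ran can be illustrated with a short sketch. `predict_gender` below is a hypothetical stand-in for the now-defunct service; the hard-coded responses reproduce the figures reported in this entry (the male probability for 'Meghan Smith' is inferred as the complement of the reported 60.4% female).

```python
# Reported predictions from the incident; percentages for the hypothetical
# two-class output. Values not stated in the report are inferred complements.
REPORTED = {
    "scientist": {"male": 95.7, "female": 4.3},
    "professor": {"male": 98.4, "female": 1.6},
    "Meghan Smith": {"male": 39.6, "female": 60.4},   # inferred male share
    "Dr. Meghan Smith": {"male": 75.9, "female": 24.1},
}

def predict_gender(text: str) -> dict:
    """Hypothetical stand-in for the Genderify API, returning reported figures."""
    return REPORTED[text]

def title_skew(name: str) -> float:
    """How many points does a 'Dr.' prefix shift the male probability?"""
    base = predict_gender(name)["male"]
    titled = predict_gender(f"Dr. {name}")["male"]
    return titled - base

print(f"'Dr.' prefix shifts male probability by {title_skew('Meghan Smith'):+.1f} points")
```

Running the probe on the reported numbers shows a shift of over 36 percentage points toward male from the honorific alone, which is the kind of systematic skew critics highlighted.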
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by an AI system, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed