Stanford researchers developed an AI system that could predict sexual orientation from dating profile photos with 81% accuracy for men and 71% for women, sparking controversy about privacy risks and potential misuse by governments in countries where homosexuality is criminalized.
Stanford Graduate School of Business professor Michal Kosinski and co-author Yilun Wang built an AI system to demonstrate privacy risks by detecting sexual orientation from facial photographs. The researchers scraped more than 75,000 U.S. dating profiles, filtering 300,000 images down to 35,000 that met their criteria. All subjects were white; the researchers said there were too few non-white profiles for statistically valid analysis. Using a deep neural network and facial analysis software, the system correctly distinguished gay from heterosexual men in 81% of cases and women in 71% of cases when shown a pair of photos, one from each group. With five photos per person, accuracy rose to 91% for men and 83% for women.

The authors said the study was intended to raise awareness of privacy vulnerabilities, particularly the potential for misuse by governments in countries where homosexuality is illegal. The research drew fierce backlash from LGBTQ advocacy groups, who called it 'junk science' that threatens safety and privacy; critics argued it revived discredited physiognomy theories. Kosinski received death threats and required campus police intervention. The study relied on the prenatal hormone theory of sexual orientation, which many experts disputed as inadequately supported. The work highlighted concerns that facial analysis technology could be misused for surveillance and discrimination.
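The reported accuracy figures refer to a pairwise test: the classifier is shown one photo of a gay person and one of a heterosexual person and must pick which is which, a metric equivalent to AUC. The sketch below illustrates that evaluation protocol under stated assumptions; the scores, function name, and data are hypothetical, not the study's code or results.

```python
# Hypothetical sketch of pairwise-accuracy evaluation (equivalent to AUC).
# A classifier assigns each face a score (higher = predicted more likely gay);
# pairwise accuracy is the fraction of (gay, heterosexual) photo pairs in which
# the gay person's photo receives the higher score. Ties count as half.
import itertools
import random

def pairwise_accuracy(pos_scores, neg_scores):
    """Fraction of cross-group pairs ranked correctly (illustrative helper)."""
    correct = 0.0
    pairs = list(itertools.product(pos_scores, neg_scores))
    for p, n in pairs:
        if p > n:
            correct += 1.0
        elif p == n:
            correct += 0.5
    return correct / len(pairs)

if __name__ == "__main__":
    random.seed(0)
    # Synthetic scores for illustration only; not data from the study.
    pos_scores = [random.gauss(0.6, 0.2) for _ in range(100)]
    neg_scores = [random.gauss(0.4, 0.2) for _ in range(100)]
    print(f"pairwise accuracy: {pairwise_accuracy(pos_scores, neg_scores):.2f}")
```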
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed