An AI image expansion tool used by conference organizers inappropriately altered a woman's professional photo, unbuttoning her shirt and adding suggestive content, including a bra underneath.
Elizabeth Laraki, a former Google employee, discovered that her professional photo for an AI conference had been altered when it appeared in promotional materials. The conference's social media manager had used an AI image expansion tool to make Laraki's cropped square photo taller for advertising purposes. The AI system generated the bottom portion of the image by unbuttoning her blouse further, adding tension around the remaining buttons, and revealing hints of undergarments underneath. It also removed the pockets from her shirt and her necklace medallion. When Laraki contacted the conference organizers, they immediately apologized and removed all promotional content containing the altered image. The incident occurred in October 2024 and was widely shared on social media; her Twitter post describing the experience drew over 2 million views.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and misrepresentation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed