Text-to-image AI systems such as DALL-E generate stereotypical, sexualized, and dehumanizing depictions of non-cisgender people when prompted with gender identity terms like 'trans', 'nonbinary', or 'queer', misrepresenting and potentially harming these communities.
Research by Eddie Ungless, a PhD student at the University of Edinburgh, examined how text-to-image AI systems represent non-cisgender identities. The study found that adding gender identity terms like 'trans', 'nonbinary', or 'queer' to image generation prompts produces images that are less human-looking, more stereotypical, and more sexualized than those generated without such terms. Images of two-spirit people were described as 'terrible', dehumanized mishmashes of different Indigenous cultures shown in religious dress. The research included a survey of 35 non-cisgender people, who strongly rejected proposed mitigation strategies such as ignoring identity terms, adding warning messages, or including identity flags; respondents felt these approaches would render their identities taboo or invisible. The study highlighted that AI systems, which detect statistical patterns in large datasets, reproduce biased training data and societal transphobia. The researcher noted that even adding more diverse training data might yield exotified representations that show minority genders only in religious dress rather than in everyday situations.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and representation of those groups.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed