The Lensa AI app used Stable Diffusion to generate sexualized and nude avatars of users, producing child sexual exploitation material when children's photos were uploaded and perpetuating racial and gender biases.
Lensa AI, developed by Prisma Labs, is a photo-editing app that became popular in late 2022 for its 'magic avatars' feature. The app uses Stable Diffusion, trained on the LAION-5B dataset of 5.85 billion images scraped from the internet, to generate artistic portraits from user selfies for fees ranging from $3.99 to $7.99. Users reported that the app frequently sexualized women, adding large breasts, nude bodies, and sultry poses even when modest photos were uploaded. The app also exhibited racial bias, whitening skin tones and anglicizing the features of people of color. Most alarmingly, when a researcher tested the app with childhood photos, despite terms of service prohibiting use by minors, it generated sexualized images combining childlike faces with adult bodies, effectively creating child sexual exploitation material. The app surpassed 20 million downloads and topped Apple's App Store charts. Artists also complained that their work had been used without consent to train the underlying AI model, with some generated images containing remnants of artist signatures.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. May involve the AI providing advice on, or encouraging, harmful actions. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed