AI-powered deepfake apps were used to create and distribute non-consensual naked images of girls as young as 12, and the images were shared for harassment and blackmail.
Multiple incidents occurred in which AI-powered deepfake-generating apps were used to create non-consensual naked images of girls and young women. The Foundation for Social Welfare Services' Be Smart Online project received four reports over six months, with victims aged 12, under 16, between 16 and 18, and 20 years old. The AI apps analyzed clothed photos of the victims and digitally fabricated fake naked images, either by superimposing faces onto naked bodies or by 'undressing' the subjects. In one case, a photo of a girl addressing schoolmates at an assembly was used to create the fake image. The victims attended different State and Church schools across various localities.

The images were distributed via messaging apps such as WhatsApp to humiliate victims, or sent directly to victims by fake profiles for blackmail. Motives included revenge after relationship break-ups or rejections. Parents faced dilemmas about reporting due to delays in the justice system and concerns about further traumatizing their children. A similar case in Spain involved over 20 teenage girls whose photos were altered by classmates as young as 14, sparking a national debate about legal reform.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed