AI-powered deepfake technology was used to create non-consensual pornographic images and videos of Noelle Martin, causing her psychological harm as part of a decade-long campaign of image-based abuse.
Starting in 2012, Noelle Martin discovered that her face had been digitally superimposed onto pornographic images and, later, videos. The abuse began with photoshopped still images created from her Facebook photos, which were posted on adult websites alongside her real name and personal information, including her home address.

In 2017, the term 'deepfake', a blend of 'deep learning' and 'fake', emerged on Reddit to describe AI that could learn facial movements and replicate them convincingly in any video. The underlying techniques drew on earlier academic work, including the Face2Face system demonstrated in 2016 by researchers from Stanford, the Max Planck Institute, and the University of Erlangen-Nuremberg. By 2018, user-friendly apps like FaceApp, and later the Chinese face-swapping app Zao (2019), made this kind of manipulation accessible to non-technical users.

Martin received an email in 2019 alerting her to deepfake videos of herself, including 11-second clips that grafted her face onto explicit sexual content. The videos were tagged with her name and described as genuine. The abuse continued for nearly a decade, with perpetrators creating increasingly realistic content and posting 'tributes' involving ejaculation onto printed fake images.

Martin's campaigning contributed to Australia criminalizing image-based abuse in 2017 and to empowering a government agency to help victims remove content and to fine social media companies that fail to take down such material within 48 hours.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed