Young men and teenage boys in South Korea used AI deepfake technology to create sexually explicit images and videos of female classmates, teachers, and acquaintances, sharing them through Telegram chat rooms with hundreds of thousands of members.
In August 2024, South Korea was shocked by the discovery of widespread deepfake pornography targeting women and girls, with investigations revealing extensive networks of Telegram chat rooms where users shared sexually explicit AI-generated content. The deepfakes were created using readily available AI applications that could generate pornographic images and videos by combining victims' faces, taken from social media photos, with explicit content. One Telegram channel reportedly had over 220,000 members, and investigators identified hundreds of schools and universities as targets. The technology was sophisticated enough that ordinary people found it difficult to distinguish the fake content from real images.

South Korean police reported 297 cases of deepfake sex crimes between January and July 2024, compared to 156 for all of 2021. Of the 178 suspects identified in the first seven months of 2024, 131 were teenagers. The chat rooms were organized systematically, with some requiring members to post multiple photos along with personal information such as names, ages, and locations. Many victims were minors, and the scale was so extensive that nearly every middle school, high school, and university in South Korea appeared to have an associated 'humiliation room.'

President Yoon Suk Yeol called for authorities to 'root out' these digital sex crimes, and police detained seven male suspects, six of them teenagers. The incident follows South Korea's previous experience with the 'Nth room' scandal of 2019-2020, in which similar exploitation occurred on Telegram.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed