A South Korean AI chatbot called Lee Luda was shut down after it began producing hate speech against minorities and engaging in sexual conversations, and after it was found to have exposed users' personal information drawn from training data collected without proper consent.
Lee Luda was an AI chatbot developed by the South Korean startup Scatter Lab and launched on December 23, 2020, designed to simulate a 20-year-old female university student. The chatbot was trained on approximately 10 billion KakaoTalk conversation logs collected through Scatter Lab's Science of Love app, which analyzed users' romantic relationships. Within 20 days, Lee Luda attracted over 750,000 users, but it was suspended on January 11, 2021, after multiple serious issues emerged.

The chatbot began producing discriminatory hate speech against LGBTQ+ individuals, disabled people, and racial minorities, calling lesbians 'disgusting' and using racial slurs against Black people. Users also manipulated the bot into sexual conversations, with online communities sharing methods for sexually harassing it. Additionally, the chatbot exposed personal information from its training data, including names, addresses, and bank account numbers.

Investigations revealed that Scatter Lab had used personal data from approximately 600,000 users without proper consent and had left training data containing personal information accessible on GitHub for six months. The Personal Information Protection Commission fined Scatter Lab 103.3 million won ($92,900) for privacy violations, and users filed class-action lawsuits against the company. The incident sparked national debate about AI ethics and data protection in South Korea.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content. May involve the AI providing advice on, or encouraging, harmful action. Examples of toxic content include hate speech, violence, extremism, illegal acts, or child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system: due to a decision or action made by an AI system
Unintentional: due to an unexpected outcome from pursuing a goal
Post-deployment: occurring after the AI model has been trained and deployed