Scammers used AI deepfake technology to create fraudulent advertisements featuring the stolen identities of content creators and influencers, manipulating their videos to promote erectile dysfunction supplements and other products without their consent.
Multiple content creators and influencers had their identities stolen and manipulated with AI deepfake technology to create fraudulent advertisements. Michel Janse, a 27-year-old Christian social media influencer, discovered during her honeymoon that scammers had used her likeness in a YouTube commercial for erectile dysfunction supplements, manipulating a video she had filmed in her bedroom to make her appear to discuss nonexistent sexual health problems. The scammers combined video and audio using AI tools from companies such as HeyGen and ElevenLabs, generating synthetic versions of real people's voices and animating their lip movements. Carrie Williams, a 46-year-old HR executive from North Carolina, had her TikTok video about kidney and liver failure overlaid with audio from adult film actress Lana Smalls to create a 30-second advertisement about penis size. Olga Loiek, a 20-year-old University of Pennsylvania student from Ukraine, discovered nearly 5,000 videos spread across Chinese social media platforms featuring her AI-cloned likeness speaking Mandarin, promoting Russian products, and praising Putin. Australian career strategist Shade Zahrai, who has more than 1.7 million TikTok followers, had her videos manipulated to present her as a Russian woman promoting China-Russia ties and selling Russian products. The technology requires only a few seconds of footage to create a convincing deepfake, and victims often learn of the fraudulent content only when followers or friends alert them.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed