Criminals used AI voice cloning technology to impersonate Linda Roan's daughter in a phone scam, convincing her to wire $2,000 to Mexico before she learned her real daughter was safe.
In February, Linda Roan received a phone call from scammers who used AI voice cloning to impersonate her 26-year-old daughter. The cloned voice, crying and pleading for help, claimed she had witnessed a drug deal and was being held captive. The scammers demanded ransom, instructing Roan to wire funds to Mexico via Western Union and MoneyGram. According to the FBI, criminals increasingly use generative AI to mimic loved ones' voices; as little as three seconds of audio can produce a clone with 85% accuracy. Roan wired $1,000 twice, totaling $2,000, before discovering her daughter was safe at home. The Federal Trade Commission identified impostor scams as the most-reported scam category of the previous year, with losses of nearly $3 billion. Police could not recover the money or prosecute the criminals because the funds went to Mexico, and Roan's credit card company refused to reverse the charges because she had authorized them.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexually explicit imagery.
Human: due to a decision or action made by humans
Intentional: due to an expected outcome from pursuing a goal
Post-deployment: occurring after the AI model has been trained and deployed