Scammers used AI-generated audio recordings of family members' voices to conduct kidnapping scams against two individuals in the Highline community, falsely claiming to have kidnapped the relatives and demanding ransom payments.
Two individuals in the Highline community were targeted by kidnapping scams involving AI-generated audio. The scammers falsely claimed to have kidnapped a family member and played AI-generated recordings of the supposed victim's voice to make the threat seem credible, then demanded ransom payments from the targets. The Federal Bureau of Investigation has noted a nationwide increase in these types of scams, which frequently target families who speak languages other than English. The scammers typically call from phone numbers in different area codes and try to keep victims on the line to prevent them from contacting their loved ones or the authorities. The report does not specify the AI system used, its developer, or the exact financial losses incurred. The incidents came to light when the targeted individuals reported them to their community. No specific details are provided about the scale of harm beyond the two reported cases, though the FBI indicates this is a growing nationwide problem.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed