Cybersecurity researchers identified a large-scale investment scam called Nomani that uses AI-powered video testimonials featuring famous personalities, combined with social media malvertising and phishing websites, to defraud victims of money and personal data.
Cybersecurity researchers from ESET identified a new investment scam called Nomani that leverages AI-powered video testimonials featuring famous personalities to defraud victims. The scam grew by more than 335% between H1 and H2 2024, with over 100 new URLs detected daily between May and November 2024.

The attack begins with fraudulent ads on social media platforms, often targeting previous scam victims with fake Europol and INTERPOL lures. These ads are published from fake profiles and from stolen, legitimate profiles of small businesses, government entities, and micro-influencers. The links direct victims to phishing websites that visually imitate local news media, abuse organizational logos and branding, or advertise fake cryptocurrency management solutions with names like Quantum Bumex and Immediate Mator.

Cybercriminals then use the harvested contact information to call victims directly and manipulate them into investing in non-existent products that falsely show phenomenal gains. Victims are sometimes duped into taking out loans or installing remote access apps. When victims request payouts, the scammers demand additional fees and personal information, including ID and credit card details, before disappearing with both the money and the data.

Evidence suggests Russian-speaking threat actors are behind Nomani, based on Cyrillic comments in the source code and the use of Yandex tracking tools. A separate but similar fraud network in South Korea defrauded victims of nearly $6.3 million using fake trading platforms; 32 people were arrested and more than 20 servers seized.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism for research or education, impersonating a trusted or fake individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed