Cato CTRL security researchers discovered a threat actor called ProKYC selling an AI-powered deepfake tool that creates fake government documents and facial recognition videos, enabling criminals to bypass two-factor authentication on cryptocurrency exchanges and commit account fraud.
Cato CTRL security researchers discovered a threat actor named ProKYC selling a sophisticated deepfake tool in the cybercriminal underground targeting cryptocurrency exchanges. The AI-powered tool creates fake government-issued documents (such as Australian passports) and generates corresponding deepfake videos that can pass facial recognition challenges during account verification processes.

The tool enables New Account Fraud (NAF) by allowing criminals to create verified but synthetic accounts on cryptocurrency exchanges, which can then be used for money laundering operations, mule accounts, and other forms of fraud. According to AARP, new account fraud accounted for more than $5.3 billion in losses in 2023, up from $3.9 billion in 2022.

ProKYC demonstrated the tool's effectiveness against the ByBit cryptocurrency exchange, showing how it creates AI-generated faces, applies credentials to high-quality forged documents with official stamps, generates videos that follow facial recognition instructions (moving the head left and right), and successfully bypasses the exchange's verification system.

The tool received positive feedback from cybercriminals and represents a new level of sophistication in financial fraud attacks. Detection is challenging because overly restrictive biometric systems can create false positives, while lax controls enable fraud.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
AI system: Due to a decision or action made by an AI system
Intentional: Due to an expected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed
No population impact data reported.