Scammers are using AI tools such as ChatGPT to create more sophisticated and convincing fraud schemes, enabling them to target larger groups with personalized messages and to bypass traditional fraud detection methods.
The report describes how criminals are using artificial intelligence tools, including ChatGPT, to enhance fraud and scam operations. AI enables scammers to write more polished, convincing messages that lack the poor grammar and obvious errors that previously helped people identify scams. The technology also allows fraudsters to generate personalized content, imitate voices and identities, and conduct more sophisticated social engineering attacks.

According to the FTC, people reported losing a record $10 billion to scams in 2023, up from $9 billion the previous year; because only an estimated 5% of victims report their losses, actual losses could be closer to $200 billion. Specific cases mentioned include Joey Rosati, who nearly fell for a jury duty scam in which criminals used his personal information to convincingly impersonate a police officer, and David Wenyu, who was targeted by a fake job opportunity scam.

A survey by BioCatch found that 70% of fraud management officials believe criminals are more skilled at using AI for financial crime than banks are at using it for prevention. AI tools help scammers automate password testing across platforms, write malicious code, and craft more convincing phishing attempts. In response, financial institutions are deploying AI-based detection systems and enhanced behavioral monitoring to identify fraudulent activity.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, and creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed