Microsoft reported thwarting $4 billion in AI-powered fraud attempts between April 2024 and April 2025, as cybercriminals increasingly use artificial intelligence to create fake websites, job postings, and tech support scams at unprecedented speed and scale.
Between April 2024 and April 2025, Microsoft's security teams detected and prevented widespread AI-enhanced fraud campaigns targeting consumers and businesses globally. The company thwarted $4 billion in fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked approximately 1.6 million bot signup attempts per hour.

Cybercriminals used AI tools to rapidly create convincing fake e-commerce websites, complete with AI-generated product descriptions, customer reviews, and chatbots, which could be set up in minutes rather than the days or weeks previously required. The fraud operations included fake job postings with AI-generated interviews, tech support scams using social engineering tactics, and sophisticated phishing campaigns. Microsoft observed significant activity originating from China and Germany, with attackers exploiting AI to scrape company information and build detailed target profiles.

The company responded by implementing AI-powered detection systems, enhancing Quick Assist with warning messages and blocking an average of 4,415 suspicious connection attempts daily, and developing domain impersonation protection using deep learning technology. Microsoft also worked with law enforcement and the Global Anti-Scam Alliance to disrupt criminal infrastructure and educate consumers about emerging AI-powered fraud tactics.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
- Human: Due to a decision or action made by humans
- Intentional: Due to an expected outcome from pursuing a goal
- Post-deployment: Occurring after the AI model has been trained and deployed