The North Korea-linked group BlueNoroff conducted cyberattacks using AI-generated deepfakes of company executives in fake Zoom meetings to distribute malware to Web3 industry employees. Separately, thousands of North Korean IT workers used stolen identities and AI-assisted techniques to obtain remote employment at U.S. companies and generate revenue for the regime.
Multiple cybersecurity incidents involved North Korean threat actors using AI technologies for malicious purposes. The BlueNoroff group, linked to North Korea's Lazarus Group, conducted sophisticated phishing operations targeting Web3 industry employees through deepfake Zoom calls featuring AI-generated versions of company executives. When victims reported audio issues, attackers shared malicious AppleScript files disguised as Zoom extensions, leading to the installation of backdoors, keyloggers, and cryptocurrency-stealing malware.

Separately, extensive investigations revealed that thousands of North Korean IT workers have infiltrated major U.S. companies using stolen identities, with some using AI face-changing software during video interviews and AI assistants to answer questions in real time. These workers generated hundreds of millions of dollars for the North Korean regime, with individual cases involving workers earning over $250,000 annually. The schemes involved over 300 U.S. companies, including Fortune 500 firms, with facilitators operating "laptop farms" in the U.S. to make workers appear domestically located. Some workers also engaged in extortion, threatening to release stolen company data unless paid additional ransoms. Federal investigations led to multiple arrests and indictments, with one case involving 14 North Korean nationals who generated at least $88 million over six years.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed