North Korean hackers are using AI tools like ChatGPT to enhance their cybercriminal operations, running sophisticated phishing campaigns on LinkedIn and other platforms to steal sensitive information and cryptocurrency, with the proceeds reportedly funding the country's nuclear weapons programs.
North Korean cybercriminals, including groups like Kimsuky (also tracked as Emerald Sleet), are leveraging artificial intelligence tools such as ChatGPT and other large language models to enhance their operations. Microsoft and OpenAI confirmed that North Korean hackers are using AI services to support malicious cyber activities, creating credible-looking recruiter profiles on LinkedIn and other social media platforms to conduct sophisticated phishing campaigns. The hackers use generative AI to improve their English, generate convincing messages, create fake identities and images, and develop more sophisticated malware. In one specific case, a senior engineer at a Japanese cryptocurrency exchange was targeted through a fake Singapore-based recruiter profile, leading to a spyware infection.

South Korea detected over 1.62 million hacking attempts last year, more than 80 percent of which were traced back to North Korea. The attacks target employees of global defense, cybersecurity, and cryptocurrency companies across platforms including Facebook, WhatsApp, Discord, and Telegram. According to UN experts, money raised from these criminal cyber operations helps fund North Korea's ballistic missile and nuclear weapons programs. Microsoft and OpenAI have disabled all accounts and assets associated with these threat actors.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial gain, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed