Cybercriminals began using OpenAI's ChatGPT chatbot to develop malicious code, including information stealers, encryption tools with ransomware potential, and dark web marketplace scripts, within weeks of its November 2022 launch.
In November 2022, OpenAI released ChatGPT, an AI-driven natural language processing chatbot that can assist with writing code, emails, and essays. Within weeks of its launch, cybersecurity researchers at Check Point Research discovered that participants in underground hacking forums were using ChatGPT to create malicious software and tools for cybercrime.

The researchers found three main categories of misuse. First, a forum participant created a Python-based information stealer that searches for common file types such as Office documents and PDFs, copies them to a temporary directory, compresses them, and uploads them to an FTP server. Second, a user dubbed USDoD created an encryption tool using various cryptographic functions, including elliptic curve cryptography and the Blowfish and Twofish algorithms, that could potentially be modified into ransomware. Third, cybercriminals demonstrated how to create dark web marketplace scripts that use third-party APIs to retrieve current cryptocurrency prices for trading illegal goods.

The researchers noted that many of these cybercriminals had little to no coding experience, with one stating it was the first script they had ever created. While OpenAI's terms of service prohibit using ChatGPT for illegal purposes, the tool was nonetheless able to generate functional malicious code that could be easily modified for harmful ends.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to develop cyber weapons (e.g., by coding cheaper, more effective malware), to develop new weapons or enhance existing ones (e.g., lethal autonomous weapons or chemical, biological, radiological, nuclear, and high-yield explosive weapons), or to use weapons to cause mass harm.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.