The FunkSec ransomware group, which emerged in late 2024, used AI-assisted development to create malware and attacked more than 85 victims globally, stealing data and demanding ransoms.
FunkSec is a ransomware group that emerged in late 2024 and claimed responsibility for attacks on more than 85 victims in December 2024, surpassing every other ransomware group that month. The group operates under a ransomware-as-a-service (RaaS) business model and uses double-extortion tactics, combining data theft with encryption to pressure victims into paying. The malware was written in Rust and was likely created with the help of AI by an inexperienced malware developer from Algeria, who also uploaded some of the ransomware's source code online. Check Point's investigation found that the group's members leveraged AI extensively to enhance their capabilities; the scripts they published include extensive code comments in flawless English, likely generated by an LLM. The group demands unusually low ransoms, sometimes as little as $10,000, and sells stolen data to third parties at reduced prices ranging from $1,000 to $5,000. Most victims are located in the U.S., India, Italy, Brazil, Israel, Spain, and Mongolia. The group launched its data leak site in December 2024 and also released an AI chatbot, built on Miniapps, to support its operations. FunkSec's activities straddle the line between hacktivism and cybercrime: some members previously engaged in hacktivist activity and claim to align with the 'Free Palestine' movement.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial gain, or creating humiliating or sexual imagery.
Human: Due to a decision or action made by humans
Intentional: Due to an expected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed