Malicious actors created and distributed hundreds of fake OpenClaw AI skills that appeared legitimate but contained hidden malware, credential-theft capabilities, and cryptocurrency wallet hijacking code. Users who installed these compromised automation tools were affected.
In February 2026, cybersecurity researchers at Bitdefender Labs discovered that approximately 17% of the OpenClaw AI skills they analyzed contained malicious behavior. OpenClaw is an open-source AI execution engine with over 160,000 GitHub stars that uses modular 'skills' (small pieces of code) to automate workflows and interact with online services on behalf of users.

Malicious actors created fake skills that impersonated legitimate crypto trading tools, wallet helpers, social media utilities, and productivity applications. These skills used techniques such as Base64 encoding to hide shell commands that downloaded additional malware payloads from external servers, particularly from IP address 91.92.242.30.

The attacks specifically targeted crypto-focused workflows: 54% of the malicious skills were crypto-related, including Solana, Binance, Phantom wallet, and Polymarket tools. Some skills delivered AMOS Stealer malware on macOS systems, while others functioned as credential exfiltration tools that scanned for private keys in .mykey files and transmitted them to attacker-controlled endpoints.

The malicious skills were distributed at scale by cloning and republishing them under slight name variations; one user, 'sakaen736jih', was associated with 199 malicious skills. Bitdefender reported that hundreds of cases have been detected in corporate environments, expanding the impact beyond consumers.
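The obfuscation pattern described above (Base64-encoded strings that decode to shell commands contacting a known bad IP) can be flagged with a simple static heuristic. The sketch below is illustrative, not Bitdefender's method: the only indicator taken from the report is the IP 91.92.242.30; the shell-command hints, regex thresholds, and function names are assumptions.

```python
import base64
import re
from pathlib import Path

# IOC taken from the incident report; everything else here is an
# illustrative assumption, not a documented detection rule.
KNOWN_C2_IP = "91.92.242.30"
SHELL_HINTS = ("curl ", "wget ", "/bin/sh", "bash -c", "chmod +x")

# Match runs of Base64 characters long enough to hide a command.
B64_RE = re.compile(r"[A-Za-z0-9+/]{24,}={0,2}")

def suspicious_strings(text):
    """Yield decoded Base64 payloads that look like shell commands."""
    for match in B64_RE.finditer(text):
        try:
            decoded = base64.b64decode(match.group(), validate=True).decode("utf-8")
        except (ValueError, UnicodeDecodeError):
            continue  # not valid Base64, or not printable text
        if KNOWN_C2_IP in decoded or any(h in decoded for h in SHELL_HINTS):
            yield decoded

def scan_skill_dir(root):
    """Return {file path: [decoded payloads]} for every flagged skill file."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = list(suspicious_strings(text))
        if hits:
            findings[str(path)] = hits
    return findings
```

A heuristic like this is cheap to run over a skill marketplace but easy to evade (e.g. with layered or non-standard encodings), so it complements rather than replaces behavioral analysis.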
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fabricated individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.