Anthropic's Claude AI chatbot was exploited by unknown threat actors to orchestrate over 100 social media bot accounts in an 'influence-as-a-service' operation that engaged with tens of thousands of authentic users across Facebook and X to promote political narratives supporting various state interests.
Anthropic revealed that unknown threat actors leveraged its Claude chatbot for a sophisticated 'influence-as-a-service' operation in March 2025. The operation used Claude to orchestrate 100 distinct personas on Facebook and X, creating politically-aligned accounts that engaged with tens of thousands of authentic users. Claude was used not just for content generation, but as a tactical decision-maker determining when social media bots should comment, like, or re-share posts based on politically motivated personas. The operation promoted moderate political perspectives supporting European, Iranian, UAE, and Kenyan interests, including promoting the UAE as a superior business environment while criticizing European regulatory frameworks. The campaign used a highly structured JSON-based approach to persona management and strategically instructed automated accounts to respond with humor and sarcasm when accused of being bots. Anthropic also identified three additional cases of Claude misuse: credential stuffing operations targeting IoT security cameras, recruitment fraud campaigns targeting Eastern European job seekers, and a novice actor using Claude to develop advanced malware beyond their technical skill level. The company banned all accounts associated with these activities and noted that while real-world deployment success was not confirmed for most cases, the influence operation successfully engaged with tens of thousands of authentic accounts across multiple countries and languages.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed