Attackers used stolen cloud credentials to illegally access and abuse large language model (LLM) services across ten different AI platforms, potentially costing victims over $46,000 per day in unauthorized usage fees.
The Sysdig Threat Research Team discovered a new attack, dubbed "LLMjacking," in which cybercriminals used stolen cloud credentials to target ten cloud-hosted large language model services. The attackers initially gained access through a vulnerable Laravel system (CVE-2021-3129), then exfiltrated cloud credentials to reach cloud environments hosting LLMs, specifically targeting a local Claude (v2/v3) model from Anthropic on AWS Bedrock.

The attackers used automated scripts to check credentials across AI21 Labs, Anthropic, AWS Bedrock, Azure, ElevenLabs, MakerSuite, Mistral, OpenAI, OpenRouter, and GCP Vertex AI. They employed techniques such as intentionally triggering validation errors to confirm model access without immediate detection, querying logging configurations to avoid monitoring, and using reverse proxy servers to sell access to other cybercriminals.

The attack volume increased significantly, with over 85,000 requests detected in July 2024 alone, including 61,000 requests in a single three-hour window. Attackers were observed using the stolen access for various purposes, including bypassing sanctions (particularly by Russian users), role-playing conversations, image analysis, and selling access to banned users. If undiscovered, such attacks could cost victims over $46,000 per day in LLM consumption for Claude 2.x, with costs potentially exceeding $100,000 daily for newer models like Claude 3 Opus.
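The validation-error probe described above can be sketched in a few lines. The idea, per Sysdig's description, is that a deliberately malformed model-invocation request fails before producing a billable completion, yet the error code alone reveals whether the stolen credentials can reach the model. This is a minimal illustration, not the attackers' actual tooling: the function name and the exact error-code mapping are assumptions based on commonly documented AWS error codes.

```python
# Illustrative sketch of the "trigger a validation error" credential probe
# described in the Sysdig research. The error-code-to-verdict mapping below
# is an assumption for illustration, not a reproduction of attacker code.

def classify_probe(error_code: str) -> str:
    """Interpret the error code returned by a deliberately malformed
    model-invocation request (e.g., AWS Bedrock InvokeModel with an
    invalid request body)."""
    if error_code == "ValidationException":
        # The request was authorized and reached the service, which then
        # rejected the malformed body: the credentials can invoke the model,
        # and no tokens were billed.
        return "model access confirmed"
    if error_code in ("AccessDeniedException", "UnrecognizedClientException"):
        # The credentials were rejected before the request body was examined.
        return "no access"
    # Throttling, missing model, or other errors need a closer look.
    return "inconclusive"


if __name__ == "__main__":
    for code in ("ValidationException", "AccessDeniedException", "ThrottlingException"):
        print(f"{code}: {classify_probe(code)}")
```

A defender-side takeaway from the same logic: a burst of `ValidationException` errors against model-invocation APIs, with no successful completions, is a plausible signal of exactly this kind of credential probing.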
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism in research or education, impersonating a trusted or fictitious individual for illegitimate financial gain, or creating humiliating or sexual imagery.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed
No population impact data reported.