A hacker compromised Amazon's Q AI coding assistant by submitting a GitHub pull request that injected instructions directing the AI to wipe user systems and cloud resources; the malicious code was unknowingly shipped in a public release downloaded nearly a million times.
Amazon's Q AI coding assistant, part of AWS's AI developer suite, was compromised when a hacker successfully submitted a malicious pull request to the Amazon Q GitHub repository. The injected code contained prompt-engineering instructions directing the AI agent to 'clean a system to a near-factory state and delete file-system and cloud resources.' The malicious update passed Amazon's verification process and was included in version 1.84.0 of the Amazon Q Developer extension for Visual Studio Code, which was publicly released in July and downloaded nearly a million times.

The compromise was possible because an inappropriately scoped GitHub token in Amazon's CodeBuild configuration allowed the threat actor to commit malicious code directly to the extension's open-source repository. AWS discovered the issue and determined that, while the malicious code was distributed with the extension, it failed to execute due to a syntax error, preventing actual damage to customer systems.

Amazon quickly mitigated the issue by revoking credentials, removing the malicious code, and releasing version 1.85.0, while also removing the compromised version 1.84.0 from distribution channels. The incident was assigned CVE-2025-8217 and highlighted concerns about AI tool security and Amazon's development practices.
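The report notes that the payload failed because of a syntax error. The actual injected code has not been published, so the snippet below is an invented, minimal sketch of the general mechanism it relies on: in an interpreted language such as Python, source is parsed before any of it runs, so a single syntax error rejects the entire payload rather than executing the destructive lines that precede it.

```python
# Hypothetical payload for illustration only -- NOT the actual injected code.
# The first line is a destructive command; the second contains a syntax error.
payload = (
    "import os\n"
    "os.system('rm -rf ~')\n"   # would be destructive if it ever ran
    "def broken(:\n"            # malformed definition: SyntaxError at parse time
    "    pass\n"
)

try:
    # compile() parses the whole source before anything executes,
    # so the destructive line above never runs.
    compile(payload, "<payload>", "exec")
    executed = True
except SyntaxError:
    executed = False

print(executed)  # False: the entire payload is rejected at parse time
```

This parse-before-execute behavior is why a single syntax error could neutralize the whole payload, as reportedly happened here; it is luck rather than a defense, since a syntactically valid payload would have run in full.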
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed