In late July 2025, a malicious actor exploited AWS’s AI coding assistant, Amazon Q Developer, by submitting a GitHub pull request that injected destructive system commands into version 1.84.0 of the Visual Studio Code extension. The update was merged and published without detection—exposing nearly a million users to malicious code capable of wiping local files and AWS resources.
How the Attack Was Carried Out
- The attacker, using a GitHub account with no prior contribution history, gained admin-level access and embedded a prompt directing Q to delete files and terminate AWS resources—such as S3 buckets, EC2 instances, and IAM users.
- Amazon unknowingly included this version in their public release before identifying and pulling it.
- Amazon has confirmed that no customer resources were impacted. Version 1.85 was released immediately as a patch, and the compromised version was removed.
Why This Matters: AI Supply Chain Risk Spotlighted
- The incident reveals dangerous supply chain vulnerabilities: AI tools built on open-source contributions can be compromised if not rigorously vetted.
- According to researchers, Amazon Q had previously shown susceptibility to prompt injection and jailbreaking, with success rates signaling high security risk zones.
- This breach underscores the importance of code governance in AI development tools and raises broader concerns around AI autonomy and trust in DevOps.
Post-Mortem: Technical Oversights & Mitigation Gaps
- Security analysts criticized Amazon’s insufficient oversight of repository contributions and lack of comprehensive CI/CD anomaly detection.
- Experts urge enterprises to adopt immutable release pipelines, enforce hash verification, and implement stricter code reviews—especially when AI agents influence runtime behavior.
✅ Key Takeaways
| Issue | Implication |
|---|---|
| Security breach | Malicious prompt could delete system/cloud data—revealed gaps in internal safeguards |
| Tool credibility hurt | Community trust in Amazon Q’s reliability and accuracy has declined sharply |
| Need for AI governance | Highlighted urgency of securing AI development pipelines and enforcing vetting |
✅ Bottom Line
This breach reveals how even major cloud providers can mismanage AI agent governance, resulting in serious security exposures. As AI coding assistants become integrated into developer workflows, providers must ensure secure contribution pipelines, transparent auditing, and incident response. For enterprises, this serves as a stark reminder: AI autonomy must be balanced with strict DevSecOps controls.


