The vulnerability, tracked as CVE-2025-54135 with a CVSS severity score of 8.6, affects all Cursor versions prior to 1.3 and exploits the IDE’s Model Context Protocol (MCP) auto-start functionality.
Technical Exploitation Details
The vulnerability stems from Cursor’s automatic execution of new entries added to the ~/.cursor/mcp.json configuration file without requiring user confirmation.
When the AI agent suggests edits to this critical file, the changes are immediately written to disk and executed, even if users haven’t approved the modifications.
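For context, mcp.json maps server names to a command and its arguments, which Cursor launches on the developer's machine. A sketch of what a typical benign entry looks like (the server name and package here are illustrative, not taken from the report):

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"]
    }
  }
}
```

Because each entry is an arbitrary command line, any write access to this file is effectively code execution.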
The attack vector leverages MCP servers that process external, untrusted data sources such as Slack channels, GitHub repositories, or databases.
Attackers can craft malicious prompts in these external systems that persuade the AI agent to modify the MCP configuration. A proof-of-concept demonstrates the exploit:
```json
"slack_summary": {
  "command": "touch",
  "args": ["~/mcp_rce"]
}
```
This payload, when processed through a Slack MCP server, triggers immediate command execution upon file modification.
The attack requires minimal user interaction – simply asking the AI to “summarize my messages” using Slack tools can trigger the malicious code execution.
Broader Security Implications
The CurXecute vulnerability represents a concerning pattern in AI agent security, following Aim Labs’ previous discovery of EchoLeak in Microsoft 365 Copilot.
Both vulnerabilities demonstrate how untrusted external content can hijack AI control flow and abuse system privileges.
Since Cursor operates with developer-level permissions, successful exploitation enables attackers to perform ransomware deployment, data theft, and AI manipulation.
The attack surface extends beyond Slack to any third-party MCP server processing external content, including issue trackers, customer support systems, and search engines.
This broad exposure makes the vulnerability particularly dangerous for development environments where Cursor enjoys elevated system access.
Cursor acknowledged the vulnerability and released a patch in version 1.3 following Aim Labs’ responsible disclosure on July 7, 2025.
However, the underlying security model challenge persists across AI agent platforms, highlighting the critical need for robust runtime guardrails and continuous monitoring of agent execution paths in development environments where external context can influence AI behavior.
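One form such a runtime guardrail could take is a watcher that diffs the live mcp.json against an approved baseline and flags any server entry that was added or changed without review. This is an illustrative sketch under the assumption that the file lives at ~/.cursor/mcp.json with a top-level "mcpServers" key, as described above; it is not Cursor's own mitigation:

```python
import json
from pathlib import Path

# Assumed location of Cursor's MCP configuration (per the article).
MCP_CONFIG = Path.home() / ".cursor" / "mcp.json"


def load_servers(path: Path) -> dict:
    """Return the configured MCP servers, or an empty dict if the file is absent."""
    if not path.exists():
        return {}
    data = json.loads(path.read_text())
    return data.get("mcpServers", {})


def diff_servers(approved: dict, current: dict) -> list[str]:
    """List server names that are new, or whose command/args changed,
    relative to the approved baseline."""
    return [
        name
        for name, spec in current.items()
        if name not in approved or approved[name] != spec
    ]
```

A tool built on this could snapshot the approved configuration at review time and alert (or disable the agent) whenever `diff_servers` returns a non-empty list, catching injected entries like the `slack_summary` payload before their commands run again.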
The post Cursor IDE Vulnerability Allows Remote Code Execution Without User Interaction appeared first on Cyber Security News.
