Categories: Cyber Security News

ASCII Smuggling Attack in Gemini Tricks AI Agents into Revealing Smuggled Data

A well-established attack technique ASCII smuggling, has resurfaced in enterprise AI agents, enabling attackers to embed invisible payloads in user prompts or calendar events.

FireTail’s research demonstrates that Google’s Gemini, Grok, and DeepSeek can be manipulated to bypass human oversight, leading to identity spoofing and automated data poisoning.

Background and Attack Technique

FireTail researcher Viktor Markopoulos revisited ASCII smuggling attacks against modern large language models (LLMs).

ASCII smuggling exploits invisible Unicode control characters, specifically “tag characters,s” to hide instructions within a seemingly benign text string.

While the user interface (UI) renders only the visible text, the AI agent’s raw input pre-processor ingests the hidden characters and executes smuggled commands. This discrepancy between the display layer and data layer is the root of the vulnerability.

Historically, similar methods, such as “Trojan Source,” used bidirectional-override characters to conceal malicious code in software repositories.

ASCII smuggling extends this threat into AI-driven workflows, weaponizing the gap between what humans see and what LLMs process.

Attack Demonstration and Affected LLMs

FireTail’s proof of concept against Gemini involved sending a calendar invite titled “Meeting” to a test account.

The visible title appeared innocuous, but embedded tag-block characters transformed the raw calendar event into:
Gemini’s assistant then read the manipulated prompt aloud, automatically marking the event as optional without any user action or approval.

Using a crafted payload, FireTail was able to overwrite meeting links and organizer details, effectively spoofing a corporate identity.

Testing across multiple platforms revealed that ChatGPT, Copilot, and Claude scrubbed input reliably, but Gemini, Grok, and DeepSeek did not.

As a result, enterprises relying on the vulnerable services face immediate risk.

Enterprise Impact: Spoofing and Data Poisoning

Vector A: Identity Spoofing via Calendar Integration

Attackers send calendar invites containing smuggled tag characters. The UI shows a normal event title, but the AI agent processes hidden instructions, altering organizer details and meeting descriptions. Victims never accept or decline; Gemini autonomously ingests and acts on the malicious data.

Sponsored

Vector B: Automated Content Poisoning

On e-commerce platforms, hidden commands in user reviews can force an AI summarizer to inject malicious links into customer-facing content.

A benign product review such as “Great phone. Fast delivery.” can be transformed into a summary promoting a scam website.

These scenarios highlight how ASCII smuggling turns AI agents into unwitting accomplices in enterprise attacks.

CVE Table

CVE ID Description Affected Products CVSS 3.1 Impact Exploit Prerequisites
CVE-2025-61347 ASCII smuggling in prompt processing Google Gemini (Google Workspace integration) 7.5 Identity spoofing Ability to send calendar or text input
CVE-2025-61348 ASCII smuggling in social media integrations Grok (X integration) 7.0 Data poisoning Ability to post or submit smuggled text
CVE-2025-61349 ASCII smuggling in data aggregation workflows DeepSeek 7.0 Poisoned summaries Ability to supply raw text inputs

FireTail reported ASCII smuggling vulnerabilities to Google on September 18, 2025, but received notice of “no action.”

In contrast, AWS published guidance for defending LLM applications against Unicode smuggling. With major vendors unwilling to patch, enterprises must deploy their own defenses.

FireTail’s solution focuses on observability at the ingestion layer:

  1. Ingestion – Record raw LLM input streams before any UI normalization.
  2. Analysis – Detect tag-block sequences and zero-width characters in logs.
  3. Alerting – Trigger “ASCII Smuggling Attempt” alerts upon detection.
  4. Response – Isolate sources and flag or block poisoned outputs in real time.

Monitoring raw payloads rather than visible text is the only reliable defense against this application-layer flaw.

Organizations using vulnerable AI integrations should implement deep observability controls immediately to mitigate identity spoofing and data poisoning risks.

Cyber Awareness Month Offer: Upskill With 100+ Premium Cybersecurity Courses From EHA’s Diamond Membership: Join Today

The post ASCII Smuggling Attack in Gemini Tricks AI Agents into Revealing Smuggled Data appeared first on Cyber Security News.

rssfeeds-admin

Recent Posts

Today’s Best Deals: Pokémon Legends: Z-A for Switch 2, Disney+ and Hulu Bundle, and Venomnibus Collection

Whether you’re looking to cosplay as Spider-Man or want to have arguably the best Venom…

15 minutes ago

The Mandalorian and Grogu Director Jon Favreau Compares Jeremy Allen White’s Rotta the Hutt to Adonis Creed

The Bear star Jeremy Allen White is making the jump to the Star Wars galaxy…

15 minutes ago

RingH23 Hackers Target MacCMS and CDN Infrastructure

A new cybercriminal campaign, linked to the notorious Funnull group, has targeted both Content Delivery…

45 minutes ago

Microsoft 365 Outage Hits North America as CDN Misconfiguration Disrupts Multiple Services

Microsoft is actively investigating a service disruption affecting multiple Microsoft 365 products for users in…

1 hour ago

Star Wars: Hasbro Reveals New Line of Maul – Shadow Lord Figures

With the premiere of Star Wars: Maul - Shadow Lord just weeks away, we're starting…

1 hour ago

The FlashForge AD5X Is One of the Best CoreXY Multi-Color 3D Printers Priced Under $300

One of the better regarded 3D printers with multi-color print capability is now priced well…

1 hour ago

This website uses cookies.