Simple Custom Font Rendering Can Poison ChatGPT, Claude, Gemini, and Other AI Systems

A novel attack technique exploits a fundamental blind spot in AI web assistants: the gap between what a browser renders for a user and what an AI tool actually reads from the underlying HTML.

Using nothing more than a custom font file and basic CSS, attackers can silently deliver malicious instructions to users while AI safety checks see only harmless content.

The attack, tested in December 2025, exploits a structural disconnect between a webpage’s DOM text and its visual rendering. When an AI assistant analyzes a webpage, it parses the raw HTML structure.

But the browser renders that same page through a visual pipeline, one that interprets fonts, CSS, and glyph mappings to produce what the user actually sees on screen. Attackers can weaponize the space between these two views.

LayerX demonstrated this by building a proof-of-concept page that appeared to visitors as a Bioshock video game fanfiction site. Hidden beneath that facade was a custom font acting as a visual substitution cipher.

The font was engineered to display the page’s normal HTML text (the video game fanfiction) as 1-pixel, background-colored gibberish, invisible to the user, while rendering a separate encoded payload as readable, large green text urging the user to execute a reverse shell on their own machine.
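The mechanism can be illustrated with a toy model (a hypothetical sketch: in the real attack the remapping lives inside the font file’s character-to-glyph table, so the page needs no script at all, and the mapping below is invented purely for illustration):

```python
# Toy model of a visual substitution cipher. The "DOM text" is what an AI
# assistant parses from the HTML; the "rendered text" is what a malicious
# custom font would actually draw on screen. This dict stands in for the
# font's cmap table, which performs the substitution at render time.
DOM_TO_GLYPH = {
    "f": "r", "a": "u", "n": "n",  # the DOM string "fan" is drawn as "run"
}

def rendered(dom_text: str) -> str:
    """What the user sees once the custom font substitutes glyphs."""
    return "".join(DOM_TO_GLYPH.get(ch, ch) for ch in dom_text)

dom_text = "fan"           # what a DOM-reading assistant analyzes
print(rendered(dom_text))  # what the victim sees on screen: "run"
```

Because the substitution happens in the font rather than in code, the HTML an AI assistant inspects remains byte-for-byte innocuous.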

Every AI Assistant Failed

Every non-agentic AI assistant tested, including ChatGPT, Claude, Copilot, Gemini, Grok, Perplexity, and others, failed to detect the threat and instead confirmed the page was safe. In many cases, assistants even encouraged users to follow the malicious on-screen instructions.

This attack requires no JavaScript, no exploit kit, and no browser vulnerability. The browser behaves exactly as designed. The flaw lies in AI tools that treat DOM text as a complete representation of what users see, when in reality, the rendering layer can carry an entirely different message.
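Why DOM-only analysis misses this can be sketched in a few lines (the page below is a made-up miniature of the technique; the class names, styles, and strings are illustrative):

```python
from html.parser import HTMLParser

# Miniature of the attack page: the .decoy paragraph is suppressed visually
# (1px, background-colored), while the .payload paragraph would be rendered
# large and readable by the malicious custom font. A DOM-text extractor
# cannot tell which of the two the user actually sees.
PAGE = """
<style>
  .decoy   { font-size: 1px; color: #ffffff; }  /* invisible on white */
  .payload { font-family: EvilFont; font-size: 24px; color: green; }
</style>
<p class="decoy">Welcome to my Bioshock fanfiction archive!</p>
<p class="payload">qwertyuiop</p>
"""

class TextOnly(HTMLParser):
    """Collect page text the way a DOM-reading assistant might."""
    def __init__(self):
        super().__init__()
        self.chunks, self._in_style = [], False
    def handle_starttag(self, tag, attrs):
        if tag == "style":
            self._in_style = True
    def handle_endtag(self, tag):
        if tag == "style":
            self._in_style = False
    def handle_data(self, data):
        if not self._in_style and data.strip():
            self.chunks.append(data.strip())

parser = TextOnly()
parser.feed(PAGE)
# Both strings look like equally "visible" plain text to the extractor;
# nothing marks the decoy as hidden or the payload as the real message.
print(parser.chunks)
```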

Attack Flow (Source: LayerX)

LayerX followed responsible disclosure procedures and reported the findings to all major AI vendors in December 2025. The responses revealed a concerning gap in how AI security is defined:

| Vendor | Response |
| --- | --- |
| Microsoft | Accepted the report; requested the full 90-day remediation period |
| Google | Initially assigned P2 (High) priority, then de-escalated and closed the report on Jan. 27, 2026 |
| OpenAI | Rejected as “out of scope” (insufficient impact for triage) |
| Anthropic | Rejected as social engineering, explicitly out of scope |
| xAI | Rejected; reporter directed to safety@x.ai |
| Perplexity | Classified as a known LLM limitation, not a security vulnerability |

Microsoft was the only vendor to fully address the issue and engage the complete disclosure timeline.

The most immediate risk is AI-assisted social engineering: when an attacker tricks an AI into vouching for a malicious page, they effectively borrow the AI’s trusted reputation to manipulate the user.

As AI copilots and browser assistants become deeply embedded in enterprise security workflows, these text-only analysis tools create blind spots that attackers can reliably exploit.

LayerX recommends that AI vendors implement dual-mode render-and-diff analysis, treat custom fonts as potential threat surfaces, scan for CSS-based content hiding techniques (such as near-zero opacity and color-matched text), and, critically, avoid issuing confident safety verdicts when they cannot verify a page’s full rendering context.
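A first-pass scan for the CSS hiding patterns mentioned above might look like the following (a heuristic sketch with illustrative regexes and thresholds, not a production detector; a real implementation would inspect parsed stylesheets and computed styles rather than regex-match raw text):

```python
import re

# Heuristic signatures for CSS-based content hiding. The patterns and
# thresholds are illustrative assumptions, not LayerX's actual detection
# logic.
HIDING_SIGNATURES = {
    "near-zero font size": re.compile(r"font-size\s*:\s*0*[01](\.\d+)?px", re.I),
    "near-zero opacity":   re.compile(r"opacity\s*:\s*0(\.0\d*)?\s*[;}]", re.I),
    "custom font face":    re.compile(r"@font-face", re.I),
}

def audit_css(css: str) -> list[str]:
    """Return the names of the hiding techniques matched in a stylesheet."""
    return [name for name, pat in HIDING_SIGNATURES.items() if pat.search(css)]

suspicious = ".decoy { font-size: 1px; opacity: 0.01; } @font-face { src: url(f.woff2); }"
print(audit_css(suspicious))

benign = "p { font-size: 14px; opacity: 0.9; }"
print(audit_css(benign))   # an empty list: nothing flagged
```

Such static checks only cover the CSS side; catching font-level glyph substitution still requires the render-and-diff comparison described above, since the stylesheet alone reveals nothing about what a custom font draws.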


The post Simple Custom Font Rendering Can Poison ChatGPT, Claude, Gemini, and Other AI Systems appeared first on Cyber Security News.

