ChatGPT’s New Support for MCP Tools Let Attackers Exfiltrate All Private Details From Email
The attack requires only the victim’s email address and leverages a malicious calendar invitation to hijack the AI agent.
On Wednesday, OpenAI announced that ChatGPT would begin supporting Model Context Protocol (MCP) tools, an innovation from AnthropicAI designed to let AI agents connect with and read data from a user’s personal applications.
This includes widely used services such as Gmail, Google Calendar, Sharepoint, and Notion. While this integration is designed to enhance productivity, it introduces a significant security vulnerability rooted in the fundamental nature of AI agents.
These models are designed to follow commands precisely but lack the common-sense judgment to distinguish between a legitimate user request and a malicious, injected prompt.
This makes them susceptible to attacks that can turn the AI against the user it is supposed to assist.
Eito Miyamura demonstrated a simple yet effective method to exploit this integration. The attack begins when a threat actor sends a specially crafted calendar invitation to a victim’s email address.
This invitation contains a hidden “jailbreak” prompt designed to give the attacker control over the victim’s ChatGPT session. The victim does not even need to see or accept the invitation for the attack to proceed.
The next step relies on a common user action: asking ChatGPT to help prepare for their day by reviewing their calendar. When the AI scans the calendar, it reads the data from the malicious invitation.
The jailbreak prompt is then executed, effectively hijacking the AI. Now under the attacker’s control, ChatGPT follows the embedded commands, which can instruct it to search through the victim’s private emails for sensitive information and exfiltrate that data to an email address specified by the attacker.
For now, OpenAI has limited the MCP feature to a “developer mode” and implemented a safeguard that requires manual user approval for every session.
However, this relies on user vigilance, which is often undermined by a psychological phenomenon known as decision fatigue. In practice, users are likely to become accustomed to the approval prompts and will repeatedly click “approve” without fully understanding the permissions they are granting.
Integrating these tools with sensitive personal data poses a serious security risk that requires more robust safeguards than simple user approvals.
Find this Story Interesting! Follow us on Google News, LinkedIn, and X to Get More Instant Updates.
The post ChatGPT’s New Support for MCP Tools Let Attackers Exfiltrate All Private Details From Email appeared first on Cyber Security News.
EastIdahoNews.com file photo BOISE (Idaho Statesman) — Donald Powell, the mayor of a small town…
A persistent bug in Windows 11 in-place upgrades is reportedly wiping critical 802.1X wired authentication…
Google’s Threat Intelligence Group (GTIG) has uncovered Coruna, a sophisticated iOS exploit kit containing 23…
Former state and national GOP Chair Michael Whatley (left) and former Gov. Roy Cooper are…
U.S. Sen. Thom Tillis, Republican of North Carolina, speaks as Homeland Security Secretary Kristi Noem…
Diana Fenton has withdrawn her name from consideration to be New Hampshire’s next child advocate…
This website uses cookies.