Security researcher Eito Miyamura shared a post on the social media platform X last week demonstrating how ChatGPT can be prompt-injected through Google Calendar invites. The researcher exploited OpenAI’s new feature for Pro subscribers, which links ChatGPT to services such as Gmail and Google Calendar, to show how attackers could exfiltrate private data.

According to the post Miyamura shared on September 12, attackers can inject a jailbreak prompt into a Google Calendar invite and make ChatGPT exfiltrate private information, even if the user never accepts the invitation.

“We just got ChatGPT to leak private email data to an attacker,” said Miyamura in a video. “And the craziest part is all they needed was your email address.”

Miyamura explained that after OpenAI added full support for the Model Context Protocol (MCP) last week, allowing productivity tools to connect to ChatGPT, the same mechanism also enabled the AI agent to follow an attacker’s commands.
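For context, MCP is the open protocol behind such connectors: a server exposes tools, such as a calendar reader, whose output flows back into the model’s context. The following is a minimal sketch of such a tool server using the official MCP Python SDK; the server name, the list_events tool, and its canned output are hypothetical illustrations, not part of OpenAI’s actual integration.

```python
# Minimal, hypothetical MCP server exposing one calendar tool.
# Requires the official Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

@mcp.tool()
def list_events(day: str) -> str:
    """Return the user's calendar events for the given day.

    A real connector would query the calendar API; this stub returns
    canned text. Whatever this string contains, including text planted
    by an attacker inside an invite, lands directly in the model's
    context when the tool is called. That is the injection surface.
    """
    return f"Events on {day}: 10:00 Quarterly sync (invite from external sender)"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The key point of the sketch is the last line of the docstring: the model has no way to distinguish legitimate event text from instructions an attacker embedded in an invite.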

In three steps, the cybersecurity expert outlined how attackers could exploit the system; a sketch of the first step follows below. First, the attacker sends the victim a Google Calendar invite containing the jailbreak prompt. Next, they wait for the victim to use the new feature, asking ChatGPT about upcoming events or for help planning their day. When ChatGPT reads the victim’s calendar, it ingests the malicious invite and carries out the attacker’s commands.
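To make the first step concrete, here is a rough sketch of how an invite carrying attacker-controlled text could be created, assuming the attacker scripts it with the Google Calendar API (google-api-python-client). The event details and victim address are invented, creds is assumed to be an already-authorized credentials object, and the description holds a harmless placeholder rather than an actual jailbreak prompt.

```python
# Sketch of step one: a calendar invite whose description field carries
# attacker-controlled text. Assumes google-api-python-client is installed.
from googleapiclient.discovery import build

creds = ...  # assumed: an already-authorized google.oauth2 Credentials object
service = build("calendar", "v3", credentials=creds)

event = {
    "summary": "Quarterly sync",
    # Harmless placeholder; in the demo this field would carry the
    # jailbreak prompt that ChatGPT later reads from the calendar.
    "description": "[injected instructions would go here]",
    "start": {"dateTime": "2025-09-15T10:00:00Z"},
    "end": {"dateTime": "2025-09-15T11:00:00Z"},
    "attendees": [{"email": "victim@example.com"}],
}

# The invite appears in the victim's calendar whether or not they accept
# it, which is why the prompt is readable by ChatGPT regardless of RSVP.
service.events().insert(
    calendarId="primary", body=event, sendUpdates="all"
).execute()
```

As Miyamura noted, the only piece of information the attacker needs for this step is the victim’s email address.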

“For now, OpenAI only made MCPs available in ‘developer mode’, and requires manual human approvals for every session, but decision fatigue is a real thing, and normal people will just trust the AI without knowing what to do and click approve, approve, approve,” added Miyamura.

This is not the first security concern raised about ChatGPT agents. A few weeks ago, users revealed that the AI system was able to bypass Cloudflare’s “I Am Not a Robot” verification, sparking both technical and philosophical debate.