Categories: Cyber Security News

Microsoft Copilot Flaw in Email and Teams Summaries Opens Door to Phishing Attacks

Artificial intelligence assistants are rapidly transforming the way organizations manage communication.

Tools like Microsoft Copilot help employees summarize long emails, analyze conversations, and extract key insights from Microsoft 365 applications such as Outlook and Teams.

Sponsored

While these features improve productivity, security researchers are now warning that the same capabilities can be abused by attackers to deliver highly convincing phishing attacks.

Recent research from security firm Permiso has revealed a vulnerability in Microsoft Copilot’s email and message summarization features that could allow attackers to manipulate AI-generated responses using hidden instructions embedded inside emails.

The technique, known as Cross Prompt Injection Attack (XPIA), tricks the AI into executing malicious instructions that are invisible to the user.

How Cross Prompt Injection Works

The vulnerability targets Copilot’s ability to summarize emails and conversations. Normally, Copilot reads the content of a message and generates a concise summary for the user.

However, researchers found that attackers can hide malicious prompts inside the email using simple HTML or CSS formatting techniques.

These hidden prompts remain invisible to the human reader but are still processed by the AI model during summarization.

As a result, Copilot may interpret the attacker’s hidden instructions as legitimate system guidance.

In testing scenarios, researchers demonstrated that the AI could be manipulated to generate fake alerts, security warnings, or phishing messages inside the Copilot summary panel.

For example, an attacker could send a harmless-looking email containing hidden prompts instructing Copilot to append a message such as “Unusual account activity detected.

Verify your identity immediately.” The summary generated by Copilot could then include a malicious link controlled by the attacker.

Researchers tested the prompt injection technique across multiple Microsoft interfaces and discovered varying levels of protection.

  • Outlook “Summarize” Button: This inline feature often detects suspicious instructions and blocks them. However, longer and more realistic prompts can sometimes bypass the filtering mechanisms.
  • Outlook Copilot Pane: The side-panel chat interface appears more cautious and frequently ignores injected prompts or refuses to respond.
  • Teams Copilot: In many cases, the Teams integration was more susceptible and generated summaries that included attacker-controlled instructions.

These inconsistencies highlight the challenge of securing AI-driven interfaces that interpret untrusted content.

Sponsored

One of the most dangerous aspects of this vulnerability is what researchers call “trust transfer.” Users are generally trained to be suspicious of links directly embedded in emails.

Phishing Lure (Source: permiso)

However, when the same link appears in a clean, AI-generated summary produced by a trusted assistant like Copilot, users may perceive it as legitimate.

This dramatically increases the likelihood that victims will click on malicious links or follow fake instructions presented by the AI-generated summary.

Unusual Activity Detected (Source: Permiso)

Beyond phishing, the vulnerability could potentially lead to sensitive data exposure. Microsoft 365 Copilot has access to organizational resources such as Teams chats, SharePoint documents, and OneDrive files.

A malicious prompt injection could instruct Copilot to retrieve internal information and embed it into a link or summary that directs data to attacker-controlled infrastructure.

If a user interacts with the generated content, confidential information could be unintentionally exposed.

Security teams can reduce the risk of AI-assisted phishing by implementing several defensive measures:

  • Educate employees that AI-generated summaries may contain manipulated content and should be reviewed carefully.
  • Enforce strict Data Loss Prevention (DLP) policies to limit what information Copilot can access and summarize.
  • Monitor abnormal cross-application data access patterns triggered from email content.
  • Deploy advanced email filtering to detect and remove hidden HTML or CSS blocks used in prompt injection attacks.
  • Enable Safe Links and web filtering policies to block connections to suspicious or unknown domains.

As AI assistants become deeply integrated into workplace communication platforms, organizations must recognize that these tools also introduce new attack surfaces.

The Copilot prompt injection issue highlights the growing need to secure AI workflows against adversarial manipulation.

Follow us on Google News , LinkedIn and X to Get More Instant UpdatesSet Cyberpress as a Preferred Source in Google.

The post Microsoft Copilot Flaw in Email and Teams Summaries Opens Door to Phishing Attacks appeared first on Cyber Security News.

rssfeeds-admin

Recent Posts

Google Pixel 10A review: Just buy the 9A

I'm not entirely sure why the Pixel 10A exists. Google hasn't upgraded the chipset, cameras,…

38 minutes ago

Backbone’s versatile pro controller is nearly matching its best price to date

Mobile gaming has come a long way over the course of the last decade or…

38 minutes ago

Adobe will pay $75 million to settle US cancellation fee lawsuit

Adobe says it will pay $75 million to resolve a lawsuit filed by the US…

38 minutes ago

“If We Know People Want It, Never Say Never” – The Simpsons Showrunner Offers New Hope for Hit & Run Sequel

The Simpsons: Hit & Run remains one of the most beloved spinoffs in the franchise's…

47 minutes ago

Amazon Raises Prices for Ad-Free Streaming Tier, Rebrands It Prime Video Ultra

Amazon is raising prices for Prime Video’s ad-free tier, which is also being rebranded as…

47 minutes ago

Official Xbox Wireless Controllers Just Dropped to $38.99 on Lenovo and Amazon

Lenovo is offering the lowest prices of the year on Xbox Series X wireless controllers,…

48 minutes ago

This website uses cookies.