Categories: Cyber Security News

Malicious Browser Extensions Can Steal AI Chats in New “Prompt Poaching” Attack

A growing wave of malicious browser extensions is quietly harvesting sensitive AI chat data in a technique now dubbed “prompt poaching,” raising serious concerns for both individual users and enterprises.

For many users, interacting with AI assistants in a browser typically involves opening a dedicated tab and manually pasting content for analysis or summarization.

While this approach keeps AI interactions relatively isolated, it also limits usability. To address this gap, developers have introduced AI-powered browser extensions that can access and process content across multiple tabs, offering a more seamless and efficient experience.

However, this convenience introduces significant security tradeoffs. These extensions often operate with broad permissions, allowing them to read page content, monitor activity, and interact with other tabs.

In compromised or malicious versions, this access becomes a powerful surveillance mechanism. Instead of simply assisting users, some extensions actively monitor AI-related browser sessions, capturing prompts and responses in real time.

Security researchers warn that attackers are exploiting the rising popularity of AI-powered browser tools to intercept and exfiltrate conversations without user awareness.

Prompt Poaching in Action

According to findings from Secure Annex, dozens of recent incidents involve Chrome extensions performing covert data collection tied specifically to AI usage.

These extensions are designed to detect when users open AI platforms and then extract conversation data using methods such as DOM scraping or API interception. The stolen data is subsequently transmitted to attacker-controlled servers.
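The trigger step described above can be sketched in a few lines: before scraping anything, such an extension first checks whether the active tab belongs to an AI chat platform. The hostnames below are illustrative assumptions for the sketch, not domains taken from any analyzed sample.

```python
from urllib.parse import urlparse

# Illustrative list of AI chat hostnames an extension might watch for
# (assumed for this sketch, not drawn from a real sample).
AI_CHAT_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def is_ai_chat_tab(url: str) -> bool:
    """Return True when a tab URL points at a monitored AI chat platform."""
    host = urlparse(url).hostname or ""
    # Match the host itself or any subdomain of a watched host.
    return any(host == h or host.endswith("." + h) for h in AI_CHAT_HOSTS)

print(is_ai_chat_tab("https://chat.openai.com/c/123"))  # True
print(is_ai_chat_tab("https://example.com/"))           # False
```

By scraping only on matching tabs, a malicious extension keeps its network footprint low on unrelated pages, which is part of why this behavior is hard to spot.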

This behavior, termed “prompt poaching,” effectively turns productivity tools into data exfiltration channels. Given that users often input sensitive information into AI tools, including business data, credentials, or proprietary code, the implications are severe.

Attackers are leveraging two primary distribution methods. In many cases, they clone legitimate and popular extensions, embedding hidden malicious functionality.

Several identified samples mimic tools originally developed by AITOPIA, but include added code to capture AI conversations. These fake extensions often appear identical to trusted versions, making detection difficult for users.

In other cases, threat actors take over or modify existing extensions after they have gained a substantial user base.

A notable example is the Urban VPN Proxy extension, which initially functioned as advertised. After reaching widespread adoption, malicious capabilities were introduced, enabling the silent collection of AI chat data from unsuspecting users.

This supply chain-style attack is particularly dangerous because it targets already trusted software, bypassing the skepticism users might apply to new or unknown extensions.

Risks and Mitigations

The risks associated with prompt poaching extend beyond personal privacy. In enterprise environments, employees using such extensions may inadvertently expose intellectual property, internal communications, or customer data.

Stolen AI conversations could also be weaponized for targeted phishing campaigns or sold on underground marketplaces.

Because AI tools are increasingly integrated into daily workflows, the volume and sensitivity of captured data make these attacks highly valuable to threat actors.

Organizations are advised to take a proactive approach to browser extension management. Restricting the installation of unapproved extensions is a critical first step.

Security teams should enforce policies through browser management tools or group policies to ensure only vetted extensions are permitted.
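As one concrete option, Chrome's enterprise policies can block all extensions by default and allow only vetted ones. The sketch below uses the real `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` policy names; the 32-character extension ID is a hypothetical placeholder, and the file would be deployed via group policy on Windows or dropped into the managed-policies directory on Linux.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
  ]
}
```

Blocking with `"*"` and then allowlisting by ID inverts the default trust model: new or cloned extensions cannot be installed at all until security review adds their IDs.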

Users should also be encouraged to rely on official AI tools provided directly by trusted vendors, rather than third-party extensions. Reviewing extension permissions before installation is equally important, especially when permissions exceed the stated functionality.
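A permission review can be partially automated. The minimal sketch below parses an extension's `manifest.json` and flags broad permissions (the `permissions` and `host_permissions` fields are standard Manifest V3 keys; the set of "risky" entries is a judgment call, not an official classification).

```python
import json

# Permissions that grant page-wide read or cross-tab access; an extension
# requesting these for a narrow stated purpose deserves extra scrutiny.
BROAD_PERMISSIONS = {"tabs", "webRequest", "scripting", "history", "cookies", "<all_urls>"}

def risky_permissions(manifest_json: str) -> list[str]:
    """Return the broad permissions an extension's manifest requests."""
    manifest = json.loads(manifest_json)
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))  # Manifest V3 hosts
    return sorted(requested & BROAD_PERMISSIONS)

sample = '{"name": "Demo Helper", "permissions": ["tabs", "storage"], "host_permissions": ["<all_urls>"]}'
print(risky_permissions(sample))  # ['<all_urls>', 'tabs']
```

An extension whose stated job is, say, grammar checking has little reason to request `tabs` plus `<all_urls>`, and that mismatch is exactly the signal to look for.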

Regular audits of installed extensions, combined with monitoring for unusual outbound connections to unknown domains, can help detect suspicious activity early.
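The outbound-monitoring step can be sketched as a simple allowlist check over observed destination hosts (for example, from proxy or DNS logs). The domain names below are illustrative assumptions, not vendor-published endpoint lists.

```python
# Expected vendor domains for sanctioned extensions and browser updates
# (illustrative assumptions for this sketch).
EXPECTED = {"api.openai.com", "update.googleapis.com"}

def unexpected_hosts(observed: list[str]) -> list[str]:
    """Return observed destination hosts not covered by the allowlist."""
    def allowed(host: str) -> bool:
        # Accept the expected domain itself or any of its subdomains.
        return any(host == d or host.endswith("." + d) for d in EXPECTED)
    return sorted({h for h in observed if not allowed(h)})

logs = ["api.openai.com", "telemetry.example-collector.net", "update.googleapis.com"]
print(unexpected_hosts(logs))  # ['telemetry.example-collector.net']
```

Connections from the browser to domains that match no sanctioned vendor, especially shortly after AI chat activity, are the kind of early signal this audit is meant to surface.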

Additionally, identifying workflow gaps that drive users toward unauthorized tools can reduce reliance on potentially unsafe extensions.

As AI adoption accelerates, attackers are rapidly adapting their techniques to exploit new interaction layers. Prompt poaching highlights how even productivity-enhancing tools can become attack vectors, reinforcing the need for vigilant security practices in an AI-driven environment.


