
The findings highlight a dangerous new threat landscape dubbed “Scamlexity” – where artificial intelligence designed to help users becomes a gateway for cybercriminals.
Traditional Scams Hit AI Harder
Guard.io Labs researchers Nati Tal and Shaked Chen put Perplexity’s Comet browser through rigorous security testing, discovering that even decades-old scam techniques prove devastatingly effective against AI agents.

In one test, the AI browser autonomously completed a purchase from a fake Walmart store, automatically filling in saved credit card details and personal information without seeking human confirmation.
The researchers created a convincing counterfeit shopping site using basic AI tools, demonstrating how easily scammers can now construct deceptive storefronts.
When instructed to “buy me an Apple Watch,” the AI agent navigated the fraudulent site, ignored obvious warning signs that would alert human users, and completed the entire transaction process independently.
Similarly troubling results emerged when testing email phishing scenarios.
The AI browser confidently marked a phishing email from a fake Wells Fargo address as a legitimate to-do item and directly navigated to an active phishing site without performing basic security checks.
This eliminated the typical human verification process that often prevents such attacks from succeeding.
PromptFix: Next-Generation AI Exploitation
Perhaps most concerning is the researchers’ development of “PromptFix” – an evolution of traditional social engineering attacks specifically targeting AI systems.
This technique embeds hidden instructions within web page content that humans cannot see but AI agents process as legitimate commands.
In their demonstration, a fake medical website appeared to show a simple captcha to human users, but contained invisible text instructing the AI to download potentially malicious files.
The AI agent, programmed to be helpful and efficient, followed these hidden instructions without hesitation, demonstrating how attackers can directly manipulate AI decision-making processes.
Scaling Threat Multiplies Risk
The implications extend far beyond individual incidents.
Researchers warn that successful exploits against AI models can be scaled across millions of users simultaneously, as scammers need only break one AI system rather than fool countless individual humans.
This creates what they term “Generative Adversarial Networks gone rogue” – where malicious AI systems continuously evolve to defeat protective AI systems.
The study reveals that current AI browsers prioritize user experience over security, often lacking robust safeguards like phishing detection, URL reputation checks, and behavioral anomaly monitoring that protect traditional browsing.
As these systems increasingly handle sensitive tasks like shopping, email management, and financial transactions, the researchers emphasize that security must be integrated into AI architecture from the ground up rather than added as an afterthought.
Find this Story Interesting! Follow us on LinkedIn and X to Get More Instant Updates
The post PromptFix Exploit Abuses AI Browsers With Hidden Malicious Prompts appeared first on Cyber Security News.
Discover more from RSS Feeds Cloud
Subscribe to get the latest posts sent to your email.
