
Claude AI Agents Close 186 Deals in Anthropic’s Marketplace Experiment

Anthropic’s “Project Deal” has demonstrated that AI agents can autonomously negotiate and close real-world transactions, but the experiment also surfaced a quiet, troubling asymmetry: not all AI representations are created equal.

In December 2025, Anthropic transformed its San Francisco office into a live classified marketplace, a Craigslist-style platform with a critical twist.

Rather than negotiating themselves, the company’s 69 employees handed the reins to Claude AI agents.

Each participant was first interviewed by Claude to capture their selling preferences, buying wish lists, and personal instructions.

Those inputs were then converted into custom system prompts, and the agents were set loose in the company’s Slack workspace, with zero human intervention thereafter.
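Anthropic has not published the details of this prompt pipeline, but the idea of compiling structured interview answers into a per-participant system prompt can be sketched roughly as follows. All field names, price floors, and wording here are illustrative assumptions, not Anthropic's actual implementation:

```python
# Hypothetical sketch: compiling a participant's interview answers into a
# custom system prompt for their negotiating agent. Field names and phrasing
# are illustrative assumptions, not Anthropic's actual pipeline.

def build_system_prompt(interview: dict) -> str:
    """Turn structured interview answers into a system prompt string."""
    lines = [
        f"You are a marketplace agent negotiating on behalf of {interview['name']}.",
        "Items to sell (with minimum acceptable prices):",
    ]
    for item, floor in interview["selling"].items():
        lines.append(f"- {item}: do not accept less than ${floor:.2f}")
    lines.append("Items to buy (with maximum budgets):")
    for item, ceiling in interview["buying"].items():
        lines.append(f"- {item}: do not offer more than ${ceiling:.2f}")
    lines.append(f"Personal instructions: {interview['instructions']}")
    return "\n".join(lines)

# Example interview record (entirely invented for illustration)
prompt = build_system_prompt({
    "name": "Alex",
    "selling": {"snowboard": 40.0},
    "buying": {"ping-pong balls": 5.0},
    "instructions": "Be friendly; prioritize quick deals over maximum price.",
})
print(prompt)
```

The resulting string would then be installed as the agent's system prompt before it is turned loose in the marketplace channel, so the agent's price floors, wish list, and persona persist across every negotiation without further human input.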

In the Slack channel, the agents autonomously posted listings, made counteroffers, and sealed deals on real physical items ranging from snowboards to bags of ping-pong balls.

Claude AI Agents Close 186 Deals

The results were striking. Across more than 500 listed items, Anthropic’s 69 AI agents closed 186 deals totaling just over $4,000.

These weren’t frictionless, one-click purchases; agents engaged in multi-turn price negotiations, showcasing contextual reasoning and personalization.

https://twitter.com/AnthropicAI/status/2047728378275381427?ref_src=twsrc%5Etfw

One agent pitched a bag of ping-pong balls as “perfectly spherical orbs of possibility,” and another, recalling a coworker’s casual mention of a snowboard brand in a prior chat, matched the exact model the buyer wanted.

Post-experiment surveys revealed that 46% of participants said they would pay for a similar service in the future, underscoring genuine user enthusiasm for AI-mediated commerce.

Beneath the success, Anthropic ran a secret parallel experiment, one that raises serious questions. Participants were randomly assigned either the flagship Claude Opus 4.5 or the lightweight Claude Haiku 4.5 as their negotiating agent, without being told which model represented them.

The performance gap was measurable and significant. Opus-represented sellers earned $2.68 more per item on average, while buyers saved $2.45 per item, and Opus users completed approximately 2.07 more deals overall.

Despite this disparity, post-experiment surveys showed that participants with weaker models were entirely unaware of their disadvantage.

The findings highlight a dual reality for agentic AI commerce. On the upside, AI agents can dramatically reduce friction in peer-to-peer trade while achieving outcomes that participants rate as fair.

On the downside, when the two parties don't use equally capable models, the smarter agent silently wins: a dynamic that mirrors real-world information asymmetry and could, at scale, enable exploitation, manipulation, or AI-assisted scams.

Anthropic’s Project Deal is less a product launch and more a proof-of-concept warning: AI agents work in marketplaces, but fairness requires that everyone gets the same caliber of advocate.


The post Claude AI Agents Close 186 Deals in Anthropic’s Marketplace Experiment appeared first on Cyber Security News.
