FIRESIDE CHAT: Leaked secrets are now the go-to attack vector — and AI is accelerating exposures

A consequential shift is underway in how enterprise breaches begin. The leaked credential — once treated as a hygiene problem — has become the primary on-ramp.

Related: No easy fixes for AI risk

Last August’s Salesloft campaign was the pattern in miniature. Attackers used OAuth tokens stolen from one chatbot vendor to pull Salesforce data from some 760 enterprise instances — Cloudflare, Cisco, Palo Alto Networks, and TransUnion among them, according to Mandiant. Google’s Threat Intelligence Group reported the primary intent: credential harvesting, each stolen key the path into the next victim.

That is the shape of the modern enterprise breach, says Dwayne McDaniel, senior developer advocate at non-human identity security firm GitGuardian, whom I interviewed at RSAC 2026. Each leaked credential, he explained, is a key to a door behind which sit more keys.

Leaks spiking

GitGuardian scans every public GitHub commit — every new batch of developer code published to a shared repository — for hard-coded secrets: credentials typed directly into source code. Its latest report documented 28.6 million such exposures in 2025 alone — a 34 percent year-over-year jump, the largest in five years. Leak rates in private repositories were six times higher.
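In principle, many hard-coded secrets are pattern-matchable. The sketch below is a toy regex-based scanner in that spirit. It is illustrative only: the patterns and names here are my own assumptions, and production scanners such as GitGuardian's rely on hundreds of provider-specific detectors plus entropy analysis, not two regexes.

```python
import re

# Illustrative patterns only; real secret scanners use hundreds of
# provider-specific detectors and entropy checks on top of regexes.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan(source: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for each suspected secret."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits

# The credentials below are fabricated placeholders, not live keys.
snippet = (
    'aws_key = "AKIAIOSFODNN7EXAMPLE"\n'
    'api_key = "sk_live_abcdefghij0123456789"\n'
)
print(scan(snippet))  # flags both lines
```

The hard part, as the numbers above show, is not detection but what happens after the alert fires.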

And 64 percent of the credentials leaked in 2022 remain active today. GitGuardian emails developers the moment an exposed credential hits GitHub. The alerts go out. The credentials are rarely revoked. This is not a detection problem. It is a remediation problem.

AI is steepening the curve. Eight of the ten fastest-growing leaked-secret categories in 2025 traced directly to AI infrastructure. OpenRouter API keys — used to wire large language models into applications — jumped 48x year over year. DeepSeek keys were up 23x.

What’s a developer?

Developers are no longer alone in producing that code. GitGuardian found that commits co-authored by Claude Code — cases where a developer let the AI complete the commit without review — contained secrets 33 percent of the time. The baseline across all commits: 1.5 percent.

McDaniel’s own boss, a chief marketing officer, is now producing code. Executives and marketing leaders with no programming background are building production systems with live credentials embedded, because the tools make it that easy. “The developer becomes the AI,” McDaniel told me, “and you become the production manager.”
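The mechanical fix for this class of leak is old news to professional developers but invisible to the new cohort of tool-assisted builders. A minimal sketch, using `OPENROUTER_API_KEY` as an assumed environment-variable name:

```python
import os

# Risky: a credential typed directly into source, exactly what secret
# scanners flag. The value here is a fabricated placeholder, not a key.
API_KEY = "sk-or-0000-placeholder-not-a-real-key"

# Safer: read the credential from the environment at runtime so it never
# lands in a commit. OPENROUTER_API_KEY is an assumed variable name.
API_KEY = os.environ.get("OPENROUTER_API_KEY", "")
```

Moving the key out of source is only half the remediation; a credential that already leaked must also be revoked and rotated, which is precisely the step the data shows most teams skipping.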

The fix isn’t purely technical. Gartner’s 2026 IAM Summit flagged machine identity as among the least mature areas in enterprise security. Workload identity frameworks like SPIFFE, in production at Uber and State Farm, are replacing API keys one system at a time. The tools exist.

Governance needed

What has to come first is the governance conversation. Some forty regulatory standards already state the requirement plainly, McDaniel notes: prevent unauthorized access. Who can reach what, can you prove it, and what did you do about it? That framing doesn’t require new vocabulary. It requires the conversation to happen.

Organizations that treat AI-assisted code as finished work will ship faster this quarter. They will also ship the next class of exposures buried inside it.

Yet McDaniel is hopeful. Standards bodies have stopped treating credential abuse as tomorrow’s problem. The IETF, the CNCF, and the OpenID Foundation have active work on machine identity, workload authentication, and agentic AI governance. The tools are arriving. Whether governance arrives in time is the open question.

For a full drill down, please give a listen to the accompanying Fireside Chat podcast.

I’ll keep watch and keep reporting.


Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)

The post FIRESIDE CHAT: Leaked secrets are now the go-to attack vector — and AI is accelerating exposures first appeared on The Last Watchdog.

