
Microsoft’s Vasu Jakkal set the scale on day one. She noted that IDC projects 1.3 billion AI agents in operation by 2028 — each one requiring the same governance and protection organizations currently apply to human users. That number puts a concrete frame around both waves: the tools needed to defend AI-native infrastructure, and the tools needed to secure AI systems themselves. Neither problem is theoretical anymore.

The message landed differently in this room than it might have elsewhere: the challenge in front of this industry has grown past what any single organization, or any single technology, solves alone. What’s required now is the kind of collective will that Ardern built in the aftermath of Christchurch — clear values, shared purpose, leaders who show up.
The tools and practices to respond are further along than the headlines suggest, though they look nothing like what worked five years ago. The cybersecurity industry has always been fast to adapt. What's different this time is that adaptation can't happen company by company, SOC by SOC; it has to be built across organizations, disciplines, and technologies simultaneously, and that work is already underway. The practitioners profiled below are working the problem from the inside, each one a piece of what a coordinated response looks like.

Tony Anscombe, Chief Security Evangelist, ESET
Anscombe has spent years pushing a reframe the industry resists: a cyberattack is a business disruption event, not a technical incident, and the tools for managing it should be measured against financial exposure, not threat intelligence. The Jaguar Land Rover ransomware attack makes the case concretely: five weeks of factory shutdown, 5,000 supplier businesses paralyzed, a £1.5 billion UK government bailout. Supply chain risk and business risk are the same risk. He also flagged PromptLock, an NYU academic proof-of-concept for AI-powered ransomware that found its way into the wild. His warning: adversaries are reading the research papers too.
Kevin Surace, CEO, TokenCore
The industry drove attackers to the front door and left it unlocked. That was Surace’s blunt assessment heading into RSAC — and the Tycoon2FA kit validated it: 96,000 successful break-ins before Microsoft dismantled the tool, every one bypassing a legitimate authentication app. When Salesforce and Microsoft mandated MFA, they inadvertently handed attackers a map. TokenCore’s answer is fingerprint-based hardware authentication where biometrics never leave the device, access is proximity-bound, and there is nothing to phish, replay, or socially engineer. Gartner projects the biometric assured identity market at $16 billion within seven years. Surace calls that conservative.
Dwayne McDaniel, Developer Advocate, GitGuardian
GitGuardian’s 2026 State of Secrets Sprawl report delivered the week’s most arresting number: 64 percent of secrets that leaked in 2022 are still valid and exploitable today. The industry has a detection capability. It does not have a retirement discipline. McDaniel’s deeper point is structural — standing privilege is the root flaw. Any entity holding a credential inherits whatever that credential was authorized to do, permanently, until someone actively revokes it. Nobody does. AI-accelerated development is compounding the exposure: commits co-authored by Claude Code are twice as likely to contain leaked secrets.
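The detection half of that equation is conceptually simple, which is part of McDaniel's point: finding secrets is a solved problem, retiring them is not. The sketch below is a minimal illustration of pattern-based secret scanning; the regexes and the sample diff are made up for this example, and real engines such as GitGuardian's combine hundreds of provider-specific detectors with entropy and validity checks.

```python
import re

# Illustrative patterns only; production scanners use far richer detector sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key['\"]?\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A made-up commit diff containing two leaked credentials
# (AKIAIOSFODNN7EXAMPLE is AWS's documented example key).
sample_diff = 'config = {"api_key": "sk_live_' + "a" * 20 + '"}\nAKIAIOSFODNN7EXAMPLE'
for name, secret in scan_text(sample_diff):
    print(name, "->", secret)
```

Detection like this runs in milliseconds; the 64 percent figure exists because nothing equally automatic revokes what gets found.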
Pranava Adduri, Co-Founder & CTO, Bedrock Data
The problem most security teams haven’t named yet: AI’s entire purpose is to find, organize, and synthesize data — including the data enterprises accumulated over a decade and largely forgot about. Adduri’s argument is that loose permissions compounded over years now give AI agents access to things they were never meant to touch. Heading into RSAC, Bedrock Data expanded ArgusAI — its AI governance layer — to map the full exposure chain: the agents deployed, the MCP servers brokering their access, and the sensitive data those systems can retrieve and act upon. The Metadata Lake underneath it all is the foundational layer safe AI governance has to sit on.
Amit Sinha, CEO, DigiCert
The alarmists calling agentic AI an identity crisis are half right — the problem is real, but so is the framework for solving it. AI agents need digital passports: cryptographic, immutable identities that travel with them and can be revoked. The sharper near-term pressure is a mandate most organizations haven’t absorbed. The CA/Browser Forum is shrinking TLS certificate lifetimes from 398 days to 47 — an 8X increase in renewal volume. A bank CSO told Sinha his network already logs three certificate-related outages daily. Without automation, that number becomes one per hour.
Sanjay Castelino, President, Skyhigh Security
The security industry spent years building choke points — known flows, known users, known destinations. AI dissolved all three assumptions at once. Castelino’s frame at RSAC: the browser is now the enterprise edge, and what matters is not where employees are going but what they are doing inside those sessions. Skyhigh arrived with two concrete answers — Next-Generation SSE Hybrid architecture and patent-pending Secure Browser Controls that enforce data policy without forcing organizations to replace their existing browsers. A less-discussed signal: European customers are now demanding on-premise or sovereign cloud deployments, not for compliance, but because geopolitical instability has made cloud-only architecture feel like a single point of failure.
Ted Miracco, CEO, Approov
Every mobile API was built around a single assumption: a human being on the other end. Agentic AI has broken that assumption — and Miracco calls the gap it leaves the Agency Gap. Mobile is the least prepared surface for what follows. API keys are compiled directly into app packages, where they’re extractable through standard monitoring tools. Once an attacker has a valid key, an AI agent can replay authenticated requests at machine speed, cycling through permutations indefinitely. Approov’s answer: move secrets off the device entirely, delivering them just-in-time only to verified, untampered apps.
Jamison Utter, Field CISO, A10 Networks
Utter’s framing cut through the noise: language is now an attack surface. Not SQL injection, not malware — language itself. What makes LLMs powerful also makes them vulnerable to semantic manipulation that no existing tool was built to detect. His four words for the moment: machines fighting machines. A10 built its answer in-house — an AI Firewall using a small language model trained on attack data to inspect prompts inbound and responses outbound in real time, at carrier scale. Most guardrail products failed under production load, Utter noted. This one was built to survive it. General availability: April 7.
Rajiv Pimplaskar, CEO, Dispersive
Few practitioners on the floor were tracking Whisper Leak — and that, Pimplaskar suggested, is exactly the problem. The side-channel attack flagged by Microsoft in late 2025 allows a passive listener to infer the content of TLS-encrypted LLM communications by analyzing packet sizes and timing cadence alone. No decryption required. TLS protects the data; it does not hide the pattern. Dispersive’s answer is to make the pattern disappear — splitting and obfuscating traffic across dynamically shifting paths. A multi-month pilot with American Tower just completed, validating the architecture for AI and GPU workloads at the edge.
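Whisper Leak's mechanics can be shown in miniature. The sketch below is a toy model, not the attack Microsoft documented (which uses statistical classifiers over packet sizes and inter-arrival timing); the candidate responses and the fixed 16-byte record overhead are illustrative assumptions. The point it demonstrates: when encryption preserves lengths, a sequence of ciphertext sizes alone can fingerprint a streamed response.

```python
def observed_sizes(token_chunks, overhead=16):
    # What a passive listener sees on the wire: TLS hides content,
    # not the length or cadence of each streamed record.
    return [len(chunk.encode()) + overhead for chunk in token_chunks]

# Hypothetical streamed responses to two different prompts.
candidates = {
    "weather": ["Sunny", " with", " highs", " near", " 70"],
    "medical": ["Based", " on", " those", " symptoms", " see", " a", " doctor"],
}

# The eavesdropper precomputes size fingerprints for candidate responses,
# then matches a captured ciphertext-size sequence against them.
fingerprints = {tuple(observed_sizes(chunks)): topic
                for topic, chunks in candidates.items()}

captured = observed_sizes(candidates["medical"])  # sniffed, never decrypted
print("inferred topic:", fingerprints[tuple(captured)])
```

Dispersive's mitigation attacks the precondition: once traffic is split and obfuscated across shifting paths, no single observable size-and-timing sequence maps back to a response.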
Hallgrimur (Halli) Bjornsson, CEO, Varist
Varist’s roots trace to Iceland’s Frisk Software — one of the original antivirus pioneers — which means Bjornsson was thinking about malware at machine scale long before most of this week’s vendors existed. The company nearly deleted its decades-deep malware dataset before he recognized what ChatGPT made possible: a strategic training asset, not a storage liability. At RSAC, Varist launched a free community malware scanner powered by its Hybrid Detection Engine, processing files in 8.5 milliseconds versus the 30-minute sandbox defenders have quietly hated for years. AI-generated, self-mutating malware is now confirmed in the wild.
Yogita Parulekar, CEO, InviGrid
Parulekar put it plainly in a brief floor exchange: writing an AI agent has become easy. Deploying it securely is where organizations fall apart. Developers who can build an agent over a weekend expect production deployment at the same speed — but they’re not security engineers and aren’t slowing down to become ones. InviGrid’s platform closes that gap automatically: securing connections, enabling encryption and logging, enforcing least privilege at the moment of deployment, not after. Her read on where things stand: 2025 was AI agent experimentation. 2026 is when enterprises take them to production and discover what they missed.

Bell’s story is the BYOAI thesis made flesh. A medically retired Army veteran who taught himself AI in his garage, he built a penetration testing integration for PlexTrac, sold it for $100,000, then launched Suzu Labs — now carrying $2.5 million in pipeline across cybersecurity consulting and custom AI deployments. The pitch is precise: enterprises want AI but cannot send proprietary data to OpenAI or Anthropic. Suzu builds localized implementations on open-source models running entirely on client infrastructure. Nothing leaves the building. No outbound API calls. At RSAC, the company swept four Global InfoSec Awards.
Rajeev Raghunarayan, Head of Go-to-Market, Averlon
The remediation gap is not where most security programs are looking for it. Scanners have gotten good at finding vulnerabilities — the failure is everything that happens next: prioritization, context, and fix. Averlon works that second half of the workflow, using AI to determine which findings trace to high-value data and which ones actually need to move. In some deployments, it has cut the critical and high vulnerability workload by 90 to 95 percent. A shift-left capability — intercepting risky code before it commits — entered the market just two months ago.
Noam Issachar, Chief Business Officer, Jazz Security
Jazz Security made the week’s sharpest entrance: walked in with a thesis and walked out with a trophy. Legacy DLP never worked, and AI has made the gap untenable. The startup won the CrowdStrike-AWS-NVIDIA Cybersecurity Startup Accelerator by doing what the old tools couldn’t — understanding not just what data moved, but why, who touched it, and what the intent was. Its agentic investigator, Melody, replaces alert triage with pre-investigated answers. In a world where AI agents reach data across every application layer, context isn’t a nice-to-have. It’s the whole game.
Ambuj Kumar, CEO, Simbian
Simbian arrived at RSAC with two years of momentum behind it and a platform announcement that crystallized what that momentum has been building toward. The unified platform Kumar unveiled brings together three coordinated agents — SOC response, penetration testing, and threat hunting — operating on a shared intelligence layer called the Context Lake, which stores the institutional knowledge security teams usually pass between people. The business case is already in the market: 15x customer growth over the past year. Kumar’s thesis hasn’t shifted — AI agents can outperform L1 and L2 analysts — but at RSAC, the architecture to prove it at scale arrived.
* * *
Forty-four thousand practitioners came to Moscone with an urgent question. They didn’t leave with an answer — but they left with something more useful: proof that the work is already underway, distributed across dozens of organizations, each building a piece of the response the question demands. The infrastructure is arriving.
I’ll keep reporting and keep watching.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)
The post RSAC 2026: No easy fixes for expanding AI attack surface, but a coordinated response is emerging first appeared on The Last Watchdog.