LW ROUNDTABLE: Part 4, Trust frameworks on trial and the push toward verifiable systems
Security teams weren’t just defending data. They were making split-second calls about what to trust, who to trust — and when that trust could be revoked.
In this final installment of our 2025 Year-End Roundtable, we examine what happens when traditional trust frameworks no longer hold. From credential sprawl and federated identity to session hijacking and vendor access, today’s assumptions are being stress-tested at every layer of the stack.
The leaders featured here don’t offer one-size-fits-all answers. But they do surface hard-won lessons — and offer practical signals for what trustworthy systems might look like in 2026.
Cando Wango, National Security Services Solution Architect, All Covered
Shadow AI and autonomous agents are forcing organizations to assume incidents rather than hope to prevent them. Human judgment remains critical as automated systems take on more decision paths. Third-party exposure and self-attested postures feel less acceptable when auditable proof is required. Resilience shifts from a technical aspiration to a business differentiator. Governance aims at rapid recovery, not just blocking threats. The next era depends on verified readiness, not claims.
Jay Bavisi, Group President, EC-Council
Accountability shifted in 2025 — from policy documents to individual liability. Compliance frameworks can name responsible parties, but they can’t ensure those individuals are ready to act. Most organizations respond with more paperwork, not capability. In 2026, real trust depends on whether named responders can interpret AI-assisted attacks and respond in real time. Governance without human fluency can’t meet rising expectations for verifiable, individual-level competence.
Brian Nichols, Principal, Baker Tilly
AI is turning into the sword and the shield at the same time. Deepfake impersonation will drive at least one major breach, exploiting human trust faster than defenses adapt. Meanwhile, defenders feed sensitive data into AI systems that create their own exposures. Security teams need governance, red-team testing, and clear verification around identity and sensitive data. Treating AI as only a defensive advantage overlooks how quickly attackers weaponize each release.
Thordis Thornsteins, Lead AI Data Scientist, Panaseer
Agentic AI will reopen the Wild West because autonomous systems make decisions without the same oversight as earlier tools. Minor authentication gaps or misconfigurations could spiral. Governance must mature quickly or risks will outpace controls as AI touches more infrastructure. The key is complete visibility into what AI systems access, strong controls that limit unintended actions, and teams that understand how autonomous behavior changes traditional security assumptions.
Laurie Mercer, Senior Director of Solutions Engineering, HackerOne
AI is scaling faster than most security teams can process. But trust doesn’t come from volume — it comes from frameworks. At HackerOne, we’re watching “bionic hackers” emerge: humans augmented by autonomous tools. They catch the noise; we make the judgment calls. Leaders are adopting AI, but the smart ones are doing it with clear oversight, human context, and transparency built in from the start.
Richard Bird, Chief Security Officer, Singulr.ai
I see AI accountability becoming the entry price for doing business. Many organizations formed oversight groups, but few created real ownership or visibility into what AI is doing. Leaders cannot protect what they cannot see. Accountability means knowing who uses AI, what data is touched, how outputs are validated, and who answers when something goes wrong. Compliance narratives do not matter if leaders cannot prove control in real time.
Chris Camacho, COO, Abstract Security
Too many organizations still approach cyber risk with a check-the-box mindset. I’m seeing that start to shift as boardrooms get more engaged and security teams push for tighter telemetry and detection. Frameworks are evolving, but gaps remain — especially when mapping controls to business outcomes. We need better integration across teams, better automation across tools, and clearer accountability around what resilience actually looks like.
Alethe Denis, Senior Security Consultant, Bishop Fox
There’s a lot of panic around deepfakes right now, but I think it’s pulling focus in the wrong direction. I’ve seen teams run fear-based tests that don’t match real-world risk. Deepfakes are a concern, yes — but not the top one. We need to anchor adversarial testing to actual risk profiles and double down on stopping the impersonation tactics that attackers use every day.
Aaron Painter, CEO, Nametag
Zero Trust still depends on recognizing who is asking for access. Deepfakes and voice clones showed that criminals impersonate people before they attack systems. Too many identity controls rely on tradition instead of confident recognition. Strengthening the moment where identity is asserted and recovered becomes the foundation. Recognition forces attackers to beat secure hardware in a user’s pocket. Trust frameworks work when identity keeps up with modern impersonation.
Artur Balabanskyy, Co-Founder & CTO, Tapforce
In 2025, we saw security success hinge on shared clarity. The organizations that held up best were the ones that defined control responsibilities early — especially across third-party ecosystems. Where trust broke down, accountability was often vague. Strengthening transparency and coordination across trust frameworks is now a top priority. This is where resilience and trust intersect — and where many teams still need work.
Anurag Gurtu, CEO, Airrived
In 2026, CISOs begin sunsetting tools, not adding more. We’ve hit peak point-product fatigue, and the old dashboard model has collapsed. AI agents can now read, reason, and act — automating threat detection, investigation, and remediation across silos. SOCs aren’t just using AI, they’re operating with it. This is the year AI moves from co-pilot to co-worker, and trust shifts from tools to autonomous systems.
Sandy Carielli, VP, Principal Analyst, Forrester
Quantum computing advances and post-quantum standards are pushing governments to issue migration guidance, but timelines and approved algorithms differ across countries. Organizations face confusion about when traditional public-key cryptography becomes unacceptable and which replacements to choose. Regulatory pressure is accelerating investment, yet global alignment remains unclear. Entering 2026, the challenge is coordinating migration across borders before quantum capability outpaces preparation. Clarity feels overdue as firms try to plan responsibly.
Danny Brickman, Principal Software Engineer, CyberArk
AI does not remove identity risk. It amplifies how quickly attackers can move and how widely access can spread. Teams are realizing that identity becomes the primary control for modern environments. Privilege misuse remains the common denominator, whether attacks use AI or traditional tooling. Resilience depends on enforcing least privilege and visibility across human and machine identities. Automation helps, but identity hygiene still decides how much damage an attacker can cause.
Dwayne McDaniel, Senior Developer Advocate, GitGuardian
Trust frameworks only matter when they change daily operations. Platform and IAM teams now own non human identity governance, and secrets must be controlled across CI and production, not only reported in CVE counts. SBOM adoption looks good on paper, yet key questions remain unanswered in incidents. In 2026, accountability lands on teams that translate trust ideas into real telemetry and revocation paths that work in production.
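McDaniel's point about secrets being controlled inside CI, rather than only counted in reports, can be sketched as a pre-merge check. This is a minimal, illustrative example; the patterns shown are simplified stand-ins, not a production ruleset or any vendor's actual detection logic.

```python
# Hedged sketch: a pre-merge secrets scan, the kind of control that
# runs in CI rather than living only in after-the-fact reports.
# Patterns here are illustrative examples, not a real ruleset.
import re

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_token": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan(text: str) -> list[str]:
    """Return the names of every pattern that matches the given text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

clean = 'config = {"region": "us-east-1"}'
leaky = 'api_key = "abcdefghijklmnopqrstu123"'
print(scan(clean))  # []
print(scan(leaky))  # ['generic_token']
```

A real deployment would pair a scan like this with the revocation path McDaniel describes: a matched secret is rotated automatically, not just flagged.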
George Prichici, VP of Products, OPSWAT
Attackers are exploiting blind spots created by outdated ideas of what a file is and how trusted integrations behave. New formats and AI interfaces handle sensitive data with little oversight, creating paths that bypass traditional controls. Trust becomes the weak point when third parties and copilots open side channels security teams never anticipated. Strengthening assurance means treating partners, APIs, and supply chains as active attack surfaces, not simple integrations.
Jason Soroko, Senior Fellow, Sectigo
SBOM adoption moved from experiments into real production workflows. Vendors are building generation into CI pipelines rather than treating SBOMs as separate documents. Vulnerability tools are integrating SBOM data directly, which accelerates discovery and resolution. The focus now is streamlining distribution and delivery so downstream consumers can trust what they receive. Software supply chain security depends on SBOMs becoming routine signals inside every build and delivery step, not side files.
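Soroko's description of SBOM generation moving into CI pipelines can be illustrated with a minimal sketch that emits a CycloneDX-style inventory of the current environment. Field names follow the CycloneDX JSON format; a real pipeline would use a dedicated generator rather than hand-rolled code like this.

```python
# Minimal sketch: emit a CycloneDX-style SBOM for the running Python
# environment, the kind of artifact a CI step might attach to a build.
# A real pipeline would use a dedicated SBOM generator instead.
import json
import uuid
from importlib import metadata

def build_sbom() -> dict:
    """Inventory installed packages as CycloneDX-style components."""
    components = [
        {"type": "library", "name": dist.metadata["Name"], "version": dist.version}
        for dist in metadata.distributions()
        if dist.metadata["Name"]
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "components": sorted(components, key=lambda c: c["name"]),
    }

if __name__ == "__main__":
    print(json.dumps(build_sbom(), indent=2))
```

Running this on every build, rather than on request, is what turns the SBOM into the routine signal Soroko describes.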
John Astorino, COO, Auvik
Shadow AI will move from isolated notebooks to autonomous processes that act across systems. Governance must account for provenance, explainability, and continuous monitoring as agents operate at scale. Auditability and anomaly detection need to be built into every automated workflow, not added later. Autonomous systems force organizations to instrument interactions with policy enforcement and visibility. Compliance becomes about controlling automated behavior as much as securing traditional user activity.
Josh Lefkowitz, CEO, Flashpoint
Automation is pushing attack tempo beyond what teams can realistically keep up with. In 2026, the real risk is agentic AI turning autonomous against soft targets like APIs and identity systems. We cannot replace human judgment at the intelligence layer. The only defense that scales is human-led and AI-augmented, used purposefully to stay ahead of an exponential threat curve.
Keith McCammon, Co-Founder, Red Canary
Software supply chain attacks are moving from broad exploitation to precision. Adversaries are embedding themselves inside open-source communities, then striking when trust is highest. The real risk is no longer just vulnerable code, but code that looks legitimate until the moment it pivots. Trust itself becomes an attack surface, and software assurance moves from checkbox to essential control as precision replaces volume in 2026.
Kevin Kirkwood, CISO, Exabeam
AI supply chain attacks may come from agents operating inside legitimate integrations rather than obvious malware. These agents can gather internal data, pivot downstream, and replicate in ways that stay hidden. SBOM mandates could become a baseline for real-time anomaly detection when components appear that were never declared. The idea of defensible by design starts with visibility. The threat may already be inside the trusted ecosystem.
Kayla Underkoffler, Director of AI Security and Policy Advocacy, Zenity
AI’s shared responsibility model is overdue for a courtroom test. The EU AI Act outlines duties across the stack, but they remain theoretical. In 2026, companies will need to operationalize accountability across builders, deployers, and agents — well before regulators catch up. The real question will be: when a multi-vendor AI system fails, who actually owns the risk?
Lee Weiner, CEO, TrojAI
Documentation helps, but it isn’t control. SBOMs and AI model cards describe intended behaviors, not actual ones. Behavioral risk lives at run time. In 2026, we’ll see enterprises shift from paper-based trust frameworks to real-time enforcement. That means continuous testing, behavioral monitoring, and operational control planes that validate what AI agents actually do — not just what vendors say they should do.
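Weiner's contrast between paper-based trust and run-time enforcement can be sketched as a policy gate that checks each action an agent proposes before it executes. The tool names and policy here are hypothetical, invented for illustration; they are not drawn from any specific product.

```python
# Hedged sketch: a run-time gate that validates an AI agent's proposed
# actions against explicit policy before execution -- enforcement over
# documentation. Tool and target names are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    tool: str
    target: str

ALLOWED_TOOLS = {"search", "summarize"}
BLOCKED_TARGETS = {"prod-db", "payroll"}

def authorize(action: Action) -> bool:
    """Return True only if the proposed action passes every policy check."""
    if action.tool not in ALLOWED_TOOLS:
        return False
    if action.target in BLOCKED_TARGETS:
        return False
    return True

# The agent's proposed step is checked at run time, not trusted on paper:
print(authorize(Action("search", "docs")))     # True
print(authorize(Action("delete", "prod-db")))  # False
```

The design choice is the point: the control plane validates what the agent actually attempts, not what its model card says it should do.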
Keith Kuchler, Chief Product & Technology Officer, Sumo Logic
We’re finally seeing real momentum around securing software delivery pipelines — but the tooling is still catching up. From my perspective, the real gains come when we combine complementary strategies and shift left across the stack. No single innovation solves everything. But when secure development practices are aligned end to end, we actually start reducing risk in meaningful ways. It’s integration and orchestration — not silver bullets — that will make trust frameworks real.
Lyal Saayman, Product Manager, Zenarmor
Industry initiatives improved transparency, but architectural opacity limits trust. You cannot build strong frameworks when inspection and enforcement occur across unknown hops. Verification must replace attestation. Security depends on seeing decision paths end to end and eliminating blind spots that appear when users disable old tools. Trust grows when architectures are transparent and source-enforced. In 2026, trust frameworks will judge infrastructure, not vendor declarations.
Mayank Kumar, Founding AI Engineer, DeepTempo
In 2026, trust will be weaponized. Attackers will act through clean infrastructure, trusted accounts, and normal APIs — no anomalies, no alerts. Detection must evolve from asking what happened to asking why. If a known entity begins acting toward malicious goals, that intent is the new signal. Rules and indicators can’t keep up. Only models that track attacker logic across sequences will give defenders a fighting chance.
Ryan McCurdy, VP, Liquibase
SBOMs and supply chain guidelines improved transparency, yet the most disruptive failures came from internal changes that slipped past review. Cloudflare and AWS outages began with individual permission and configuration updates. Trust frameworks describe software at a moment in time, but they struggle to capture how it evolves. We need transparency across the entire change path. Closing the gap between runtime trust and change-layer trust is where progress matters.
Srinivasa Addepalli, CTO, Aryaka
AI firewall controls are moving into the SASE data plane, where enforcement is consistent across cloud, edge, and zero trust networks. Enterprise applications will ship with embedded copilots that touch sensitive data through SaaS and internal LLMs. SASE will sit between users and these copilots, turning enforcement into a built-in control instead of a bolt-on. Semantic DLP becomes necessary when AI understands meaning and traditional patterns stop being enough.
Tim Callan, Chief Experience Officer, Sectigo
Software trust breaks when attackers compromise machine identities and code signing. Adversaries know that modern environments rely on automated trust decisions, so they target certificates and private keys instead of perimeter controls. Organizations need continuous validation of code provenance and lifecycle management for machine identities. Digital trust depends on strong issuance, rotation, and revocation. Supply chain security becomes about proving that every artifact is legitimate before it executes in production.
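Callan's call to prove every artifact is legitimate before it executes can be sketched with a simplified integrity gate. A pinned digest stands in here for the full signature check; real deployments verify a cryptographic signature chained to a managed machine identity, with rotation and revocation behind it.

```python
# Hedged sketch: verify an artifact's digest against a pinned trusted
# value before allowing it to run -- a simplified stand-in for code
# signing. Artifact names and bytes are illustrative examples only.
import hashlib

TRUSTED_DIGESTS = {
    "deploy-tool": hashlib.sha256(b"artifact-bytes-v1").hexdigest(),
}

def is_legitimate(name: str, artifact: bytes) -> bool:
    """Reject any artifact whose digest does not match the pinned value."""
    expected = TRUSTED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown artifacts are never trusted
    return hashlib.sha256(artifact).hexdigest() == expected

print(is_legitimate("deploy-tool", b"artifact-bytes-v1"))  # True
print(is_legitimate("deploy-tool", b"tampered-bytes"))     # False
```

The gate fails closed: an undeclared artifact, like a tampered one, never executes.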
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: This feature was assembled with the assistance of ChatGPT, using human-led editorial judgment to shape, refine, and voice-check each entry.)
The post LW ROUNDTABLE: Part 4, Trust frameworks on trial and the push toward verifiable systems first appeared on The Last Watchdog.