How do we solve the challenge of Agentic AI?
Agentic AI is being touted as offering major benefits for organisations. How effective it will be depends on where and how you deploy it. To be effective, however, it needs access to data from across the organisation. Managing that access is a major security concern, especially given the number of AI solutions on the market.

Joe Kim, President & CEO of Druid AI (Image Credit: LinkedIn)

To explore how to secure AI agents, Enterprise Times editor Ian Murphy sat down and talked with Joe Kim, President and CEO at Druid AI.

As an investor and experienced CEO, Kim has been around the security industry for over two decades. In October 2025, Druid announced its Virtual Authoring Teams, a new generation of self-building AI agents.

The problem of AI agents and security

Organisations are rolling out AI at a rate that far outstrips any previous technology deployment. They are doing so to keep up with the competition, and because the media and vendors tell them they must.

Unfortunately, the rush to deploy AI in all its forms has resulted in security taking a back seat. AI, to be effective, needs to access data across the organisation. But to access data, it needs rights and permissions, and that’s where vendors are trying to find a solution.

The first generation of agentic AI assumed that a user would instantiate an agent to carry out a task. As part of that process, the agent would inherit all the rights and access permissions of that user. It was felt that this was the only way to ensure the AI could act on that user’s behalf.

That created numerous issues with the scope of access for the AI agent and tracking its behaviour. How would you know if it was the user or the AI accessing the data? Can the existing tools used to track user risk work against an AI agent? Given the speed at which an AI agent works, does it leave time for IT security teams to intervene when it goes wrong?

How do we address this?

That’s the challenge. We can’t undo the technology, so what we need is a better way of creating and securing AI agents. That’s where Kim says Druid AI comes in.

He said, “We are an agentic AI platform that allows folks to be able to create their own agents, but predominantly on the front end servicing the conversational AI part of the market, chatbots and things like that.

“In the background, we have a full agentic AI platform with orchestration layers. We call it the Druid conductor, which has been around for a long time. It’s an orchestration layer that allows you to orchestrate not just our agents together, but if you have other microservices or agents from other vendors, you can actually orchestrate those things together.”

Drawing on the past to address the future

What is interesting here is the use of established technology, rather than creating something new. Orchestration and microservices are also technologies that IT departments and security teams understand. Extending that approach to the creation, deployment and management of AI agents is interesting.
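As a rough illustration of the pattern (all names here are hypothetical, not Druid's API), an orchestrator can be sketched as a function that chains independent micro-agents, each with a narrow capability, in the same way a microservices pipeline composes services:

```python
# Hypothetical sketch of an orchestration layer chaining micro-agents.
# Each "agent" is just a callable with a narrow capability; the
# orchestrator composes them, applying the microservices pattern
# the article describes to AI agents.

def extract_entities(text):
    # Stand-in for an NLP micro-agent: pull capitalised words as "entities".
    return [w for w in text.split() if w[:1].isupper()]

def lookup_records(entities):
    # Stand-in for a data-access micro-agent querying a system of record.
    fake_db = {"Acme": "customer since 2019", "Globex": "prospect"}
    return {e: fake_db.get(e, "unknown") for e in entities}

def orchestrate(text, agents):
    # Pass each agent's output to the next, like a pipeline of services.
    result = text
    for agent in agents:
        result = agent(result)
    return result

answer = orchestrate("summarise Acme and Globex", [extract_entities, lookup_records])
print(answer)  # {'Acme': 'customer since 2019', 'Globex': 'prospect'}
```

The point of the sketch is the shape, not the agents themselves: because each step has a narrow contract, agents from different vendors could in principle be slotted into the same pipeline.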

It also addresses the expectation that users will use more than one agent at a time. Some might be standardised agents with limited capabilities. They would need to work together or collaborate to carry out complex tasks.

Kim says that is what Druid believes will happen, and it is how its system has been built from the ground up. But unlike many of the new players in this space, Druid has been working through this for over seven years.

He said, “Our early thought was how do you utilise, almost like the next iteration of microservices? We’ll end up with a whole bunch of micro agents that actually get orchestrated together to do a bigger kind of job, ultimately. That’s the way that the technology has been built in the background.”

What made microservices work was interoperability. Kim gets that and made it clear that this is not a one-size-fits-all play. Druid wants to make its orchestration platform open to other vendors. That will make it easier for organisations to adopt agentic AI solutions from multiple vendors, but still have a single orchestration engine.

How does this address agentic AI security?

One of the changes in the market from mid-2025 was vendors realising they couldn’t let AI agents be just another non-human identity (NHI). Instead, the market has shifted to the acceptance that AI agents must have unique identities. At the same time, they should be assigned their own rights and permissions.

Doing that ensures that they can be tracked like any human identity. Their behaviours and accesses can be logged and monitored to identify security risks. Kim believes that all agents and sub-agents should be first-class identities and be subject to authorisation checks.

But doing that authorisation brings a challenge. Agents will operate at machine speed, not user speed. Kim said, “When it comes to identity, it needs to be in the platform, because we’re moving so fast. If I’m answering a question for you in real time, I’m going through 55 subroutines.”

With human identities, we have designed systems to scale for access. But the number of agents and sub-agents that are likely to be in use requires a massive scale-up. That means that the platform has to have internal identity mechanisms. It needs to do continuous checking of every agent and micro agent. Should it have access? Should it carry out those operations? What is happening to the data?
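A minimal sketch of what that in-platform check might look like (the class and permission names are hypothetical, not Druid's implementation): each agent carries its own identity and permission set, and every operation is authorised against it rather than against an inherited user account.

```python
# Hypothetical sketch: every agent and sub-agent carries its own identity
# and is authorised per operation, rather than inheriting a user's rights.
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str
    permissions: set = field(default_factory=set)

def authorise(agent, resource, action):
    # Continuous check: called on every operation, at machine speed,
    # so it must be a fast in-platform lookup rather than a round trip
    # to an external identity service.
    return (resource, action) in agent.permissions

billing_bot = AgentIdentity("billing-bot-01", {("invoices", "read")})

print(authorise(billing_bot, "invoices", "read"))  # True
print(authorise(billing_bot, "payroll", "read"))   # False
```

Because the check is keyed on the agent's own identity, the denied call to `payroll` is attributable to `billing-bot-01` specifically, not to whichever user happened to task it.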

For Druid, that means having two approaches. The first is its internal checking for authentication and authorisation. The second is integration with external identity vendors.

Bringing other human controls to agent control

When we monitor humans in the system, we use a wide variety of tools. Continuous checking of privileges and access is the one most people see.

Another key tool is behavioural analysis. It allows us to look at when users log on and what they usually do on the system. When we see a logon from a new computer, at an unusual time, or from an unusual location, security teams often put the user through additional checks.

That same behavioural analysis applies to the data and applications that the user account is accessing. Is it unusual? Have they accessed this before? If so, all is fine. If not, security teams would treat that as a potential indicator of malicious intent or stolen credentials.

But with AI agents, we don’t have a history to know what is normal. We will have to build that over time. We will also need to know who has tasked the agent and whether that is a reasonable task to ask of an agent. That means we will need a greater understanding of what users do and how.
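One way to build that history is the same way it is done for users: accumulate a baseline of what each agent touches, and flag anything outside it. The sketch below is a simplified illustration with hypothetical names and an arbitrary threshold, not a production anomaly detector:

```python
# Hypothetical sketch: building a behavioural baseline for an agent over
# time, then flagging accesses that fall outside what it normally does.
from collections import Counter

class AgentBaseline:
    def __init__(self):
        self.seen = Counter()

    def record(self, resource):
        # Called on every access, building up the agent's history.
        self.seen[resource] += 1

    def is_unusual(self, resource, min_seen=3):
        # A resource is "unusual" until the agent has touched it often
        # enough to establish it as normal behaviour.
        return self.seen[resource] < min_seen

baseline = AgentBaseline()
for _ in range(5):
    baseline.record("crm/contacts")

print(baseline.is_unusual("crm/contacts"))  # False: established behaviour
print(baseline.is_unusual("hr/salaries"))   # True: never accessed before
```

A real system would also weigh who tasked the agent and whether the request itself is plausible, which is exactly the extra context the article argues we will need.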

Deep logging is the answer to this

Kim sees deep logging and activity tracing as the answer. He said, “In traditional systems, I have a whole bunch of metrics, or the ability to do distributed tracing. Ultimately, if all else fails, you go to the logs.

“That’s how you’re going to really find things. And the faster things move, the more that the other data points actually become useful. They’re less useful now, and the logs are becoming the most useful things.”

But this raises a challenge for security teams. Current logs from SIEM and other systems are about access and well-defined touch points. With AI, we need a new set of data points to log. Among the new things to log are:

  • The questions being asked of agents/sub-agents
  • The steps/subroutines the platform runs
  • The resources and systems being accessed to answer those questions

Each of these will generate a lot of new data that will have to be captured and analysed. Given the pressure on security teams, that’s a significant additional workload. What will make it worse is the potential number of agents and sub-agents that users can task.
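The three data points above suggest a richer log record than a traditional access log. As a sketch (the field and agent names are illustrative, not a standard schema), each entry could capture the question, the steps run, and the resources touched alongside the usual who and when:

```python
# Hypothetical sketch of the richer log entry the article argues for:
# each record captures the question asked of the agent, the steps the
# platform ran, and the resources touched, alongside who tasked it.
import json
from datetime import datetime, timezone

def log_agent_activity(agent_id, user_id, question, steps, resources):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tasked_by": user_id,    # the human who instantiated the task
        "question": question,    # what was asked of the agent
        "steps": steps,          # subroutines / sub-agents invoked
        "resources": resources,  # systems accessed to build the answer
    }
    return json.dumps(entry)

record = log_agent_activity(
    "support-bot-07", "i.murphy",
    "What is the refund status for order 1234?",
    ["parse_intent", "lookup_order", "check_refund_policy"],
    ["orders-db", "policy-store"],
)
print(record)
```

Even this toy record shows why the volume concern is real: one question produces a multi-field entry, and an agent running dozens of subroutines per answer multiplies that accordingly.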

It raises the question of whether we should restrict usage until we can manage the logging and analysis. That will cause conflict with users who will see any restriction as potential gatekeeping by IT. It means that organisations will need to be very aware of any shadow use of the technology.

Compliance and observability

Many organisations are just putting all their unstructured data into LLMs. It’s seen as a simple solution to allow the LLM to learn from the data that lies outside of databases and other data systems. The advantage of this is that the LLM will create contextual links between data. To make the LLM even more effective, organisations then add all their structured data.

But that creates a solution where compliance is weak. Kim said, “The problem with this approach is that while it is a faster time-to-market, it lacks in security and accuracy. While the LLM is improving, you can’t log anything that’s going on in there. From a compliance perspective, you have no idea how the LLM is making decisions. It’s getting smarter, but it’s hard to log it, secure it, and get compliance related to it.”

So how should you do it? For Kim, that’s easy, relatively speaking. The unstructured data goes into the LLM. The structured data, which organisations already trust as a system of record, stays out of the LLM. That can already be accessed through existing microservices.

Kim says the key is to “orchestrate the two capabilities together around these microservices and these micro agents that are now utilising the LLMs by specific sections of what you’re training it to do. When you orchestrate it together, that becomes your new source of truth, or your new systems of record that is both structured and unstructured.”

What is important here is that the second approach provides control over data access and supports logging and versioning. It also makes it possible to meet security and compliance needs around how decisions are made.
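The split Kim describes can be sketched as follows. Both services below are mocks with hypothetical names: the structured facts come from a trusted (here, faked) system-of-record lookup, the narrative comes from a (here, faked) LLM call, and the orchestration step combines them while keeping the structured side authoritative:

```python
# Hypothetical sketch of the split the article describes: unstructured
# context goes through an LLM, structured facts stay in the system of
# record and are fetched via an existing (mocked) microservice.

def query_system_of_record(customer_id):
    # Mock of an existing microservice over trusted structured data.
    records = {"C-100": {"balance": 250.0, "tier": "gold"}}
    return records.get(customer_id)

def ask_llm(prompt):
    # Mock LLM over unstructured data; a real call would hit a model API.
    return f"Summary based on documents for: {prompt}"

def answer_question(customer_id, question):
    facts = query_system_of_record(customer_id)  # authoritative, loggable
    context = ask_llm(question)                  # contextual, unstructured
    # Orchestration step: the structured facts, not the LLM output,
    # remain the source of truth for anything compliance cares about.
    return {"facts": facts, "narrative": context}

result = answer_question("C-100", "Why did this customer's balance change?")
print(result["facts"]["tier"])  # gold
```

Because `query_system_of_record` sits outside the model, every structured access in this design can be logged and versioned in the conventional way, which is precisely the compliance property the LLM-only approach loses.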

How do we design and deploy agents in the real world?

This is the challenge that everyone is facing today. Across the conversation, Kim delivered a number of key takeaways for organisations.

  1. Start with a Business Problem, Not “We Need Agents”: Pick a specific, high-value business use case first. “If you’re gonna start building an agent, ask what business problem you’re trying to solve that an agent is going to help you with. Don’t try to force an agent to solve a problem. It has to be some kind of value that a customer needs for a business purpose.”
  2. Start Small and Simple – Then Build Up: Don’t start with a grand vision and try to solve the most difficult problem. “Start with the easiest thing to go after first. Why go after the hardest thing? There’s got to be a set of values that you can build up to.” Use small, limited-scope agents to prove value and learn before expanding them.
  3. Think Like a Microservices Architect: Learn the lessons from SOA and microservices. “Instead of a big bang approach, find a business case and then start small and get value out of it. There’s a value roadmap that you can do. I think that’s the right approach right now.” Treat agent design with the same realism as serious distributed systems engineering.
  4. Human-in-the-Loop Remains Essential: It is an approach that allows you to check what is happening. “You absolutely do need humans involved, as part of that. It’s one of those things, even if it could do the whole thing, how comfortable do you feel about it? There’s orchestration, workflow, automation stuff. That level of comfort is beyond just mechanics, machines, or data. It’s human nature stuff.”

Conclusion

Organisations want to rush to agentic AI due to fear of missing out (FOMO). But as we’ve seen with other technology shifts in the last four decades, rushing in is no guarantee of success. At the moment, vast sums of money are being spent on AI with, arguably, limited return on investment.

Part of that lack of ROI is unquestionably a lack of clear metrics against which it can be measured. That should be sounding alarm bells and telling the C-Suite that its strategy needs to be reset. It also indicates that many of the early approaches were flawed, and there is a requirement for a more logical approach to AI.

As agentic AI becomes the next hill that many will die on, those who survive will be those with a structured approach. Ironically, that approach has been in front of us for many years. SOA and microservices have a chequered past. SOA often failed due to poor execution, not the approach itself.

Microservices were successful, and most things we do today use the API-driven approach that they brought. Success with them is about building an architecture and data model that they can make best use of.

Applying the principles of both to agentic AI and then layering an orchestrated model on top makes a lot of sense. Add to that a new granular approach to security for agents and sub-agents, and suddenly, agentic AI looks more attractive.

The post How do we solve the challenge of Agentic AI? appeared first on Enterprise Times.

