Exabeam Launches Connected AI Security System

Exabeam has launched what it calls the first connected AI security system, extending User and Entity Behaviour Analytics (UEBA) to monitor AI agent behaviour. Unifying behavioural analytics, investigation, and posture insights gives security teams a clearer picture of risks across the organisation. The system integrates with Google Gemini Enterprise to provide real-time visibility into agent actions, enabling secure AI adoption and faster response to emerging threats.
Steve Wilson, Chief AI & Product Officer at Exabeam

Steve Wilson, Chief AI and Product Officer at Exabeam, said, “Securing the use of AI and AI agent behavior requires more than brittle guardrails; it requires understanding what normal behavior looks like for agents and having the ability to detect risky deviations.

“Exabeam is the first to apply UEBA to AI agents, and this release further extends that agent behavior analytics leadership. These capabilities give security teams the behavioral insight needed to identify risk early, investigate AI agent activity quickly, and continuously strengthen resilience as AI usage and agents become integral to enterprise workflows.”

Why is this important?

In a word, trust. Security teams use UEBA to detect suspicious behaviour by users. It tracks what they access, when, from where, and from what device, then compares that to what is known about the user and their usual work patterns.

For example, if a user connects from a previously unknown machine, it raises a flag. That could trigger verification of the machine or additional security protocols. The same is true if they connect from an unusual location or in the middle of the night.
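The baseline-and-deviation logic described above can be sketched in a few lines. The profile fields, threshold of "normal" working hours, and flag names here are illustrative assumptions, not Exabeam's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    """Baseline of what 'normal' looks like for a user or agent (assumed fields)."""
    known_devices: set = field(default_factory=set)
    known_locations: set = field(default_factory=set)
    active_hours: range = range(7, 20)  # assumed typical working hours

def risk_flags(profile: Profile, device: str, location: str, hour: int) -> list:
    """Compare one access event against the baseline and list any deviations."""
    flags = []
    if device not in profile.known_devices:
        flags.append("unknown device")
    if location not in profile.known_locations:
        flags.append("unusual location")
    if hour not in profile.active_hours:
        flags.append("off-hours access")
    return flags

# A login from a new laptop at 02:00 raises two flags.
p = Profile(known_devices={"laptop-01"}, known_locations={"London"})
print(risk_flags(p, device="laptop-99", location="London", hour=2))
# → ['unknown device', 'off-hours access']
```

The same comparison applies unchanged whether the entity being profiled is a human user or an AI agent, which is what makes the extension to agents natural.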

Extending it to AI agents makes sense. Users are tasking AI agents with work such as gathering data, analysing it, or producing reports. That means the AI agent is acting as the user, and may continue to do so unattended, for example monitoring for new information overnight.

Additionally, as pointed out by Exabeam, “Enterprises are already seeing AI agents share sensitive data, override internal policies and make unsanctioned changes without visibility into who authorized the action or why it occurred.” It creates security, compliance and privacy issues, all of which could result in significant costs for an organisation.

UEBA for AI agents is now part of a greater solution

To address this, Exabeam introduced UEBA for AI agents in September 2025. This latest update brings that and its existing security tools into a single solution. The company claims that this “unifies AI investigations in one place and strengthens teams’ ability to assess their security posture around AI usage and agent activity.”

Security teams will gain access to greater levels of data and analytics that will allow them to model how AI agents behave. That modelling will allow security teams to establish how users are taking advantage of AI agents and highlight weaknesses in their security models.

Some organisations now treat all AI agents as first-class identities. That allows them to apply security controls and policies directly to each agent. Others are creating AI agents with limited access to data and systems; these then have to be orchestrated to deliver the benefits that users want.
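One way to picture the first-class-identity approach is to give each agent its own identity with an explicit permission scope and check every action against it, denying by default. The agent names, scope strings, and helper below are hypothetical, for illustration only.

```python
# Hypothetical per-agent permission scopes; real systems would load these
# from an identity provider or policy store, not a hard-coded dict.
AGENT_SCOPES = {
    "report-agent": {"read:sales_db"},
    "admin-agent": {"read:sales_db", "write:config"},
}

def is_allowed(agent_id: str, action: str) -> bool:
    """Deny by default: an agent may only perform actions in its declared scope."""
    return action in AGENT_SCOPES.get(agent_id, set())

print(is_allowed("report-agent", "read:sales_db"))  # True
print(is_allowed("report-agent", "write:config"))   # False: outside its scope
```

Treating agents this way means the same review, audit, and revocation processes that exist for user accounts can be applied to agents without inventing a parallel control system.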

Providing a modelling tool means that security teams can look at these and other ways of better controlling the potential risk from AI agents. It will enable organisations to make better decisions on the type of access and usage they are prepared to allow.

Enterprise Times: What does this mean?

As use of AI agents has increased, organisations are beginning to realise that they need greater visibility over how AI agents operate. Inside too many organisations, AI agents have more power and access than users. They also operate outside the policy controls that are applied to user accounts. That makes them as much of a risk to security as malware.

Unless organisations get a grip on AI agents and how they are used, the risks to the business are serious.

The key element in this announcement is the modelling. It will allow organisations to look at the various options that are available to better control security risks. They can then model the use of AI agents in those scenarios and pick which best suits the business. Importantly, this is not just about a one-size-fits-all approach. Modelling means that different approaches can be deployed across the business as appropriate.

It will be interesting to see how well this is taken up by Exabeam customers and how it gets integrated into their other security and AI stacks. Additionally, what will Exabeam add next to its security model to strengthen how it deals with AI agents?

The post Exabeam Launches Connected AI Security System appeared first on Enterprise Times.
