
Nudge Security has announced what it calls “a significant expansion of its platform.” The new capabilities are designed to help organisations mitigate AI security risk at a time when companies and employees are embracing AI at pace. The goal is to improve AI compliance and governance, irrespective of where and how AI is used.

Jaime Blasco, CTO and co-founder of Nudge Security, said, “The risk isn’t just in the AI tool itself – it’s in the access pathways employees create without considering the security implications.
“A single OAuth grant can give an AI vendor continuous access to your organization’s most sensitive data. Nudge Security makes these integrations visible and manageable for the first time.”
What is Nudge Security addressing?
The focus here is on putting control back in the hands of IT and security teams, allowing them to uncover and track all AI tools in use across the organisation. Importantly, this is not just about generative AI tools; it covers all AI tools and AI-enabled applications in use.
Last week, Russell Spitler, CEO and co-founder of Nudge Security, published a blog looking at the impact of AI on organisations. It contains a graph based on data gathered from customers’ use of Nudge Security tools. That graph charts the growth in the number of unique AI tools and AI-enabled applications, not instances, found inside customer environments.
In July 2023, Nudge Security detected 75 generative AI tools inside customers’ environments. By December 2025, that number had reached 1,579. At the individual organisation level, the average is 39 tools per enterprise. That is tools, not instances; inside organisations, there are multiple instances of those tools. More importantly, many of those tools, such as Otter and Descript, are general-purpose tools used by large numbers of people.
It is impossible for IT, security and compliance teams to manage that level of growth manually. Discovery, management, policy and compliance are the primary challenges teams have to address. The company has already addressed discovery with a free Shadow AI inventory tool. Addressing the other challenges is the main focus of the new capabilities.
What are the new features?
This is all about tightening governance and enforcing policy. Nudge Security has added six new capabilities:
- AI Conversation Monitoring: Detect sensitive data shared via file uploads and conversations with AI chatbots, including ChatGPT, Gemini, Microsoft Copilot, and Perplexity.
- Policy Enforcement via the Browser: Deliver guardrails to employees as they interact with AI tools, to educate them and enforce the organisation's acceptable use policy.
- AI Usage Monitoring: See trends of Daily Active Users (DAUs) by department, individual user, and specific AI tools (approved or unsanctioned) to quickly respond to business needs and potential risks.
- Risky Integration Detection: Automatically surface data-sharing integrations and OAuth/API grants that give AI tools access to sensitive corporate data.
- Data Training Policy Summaries: Condensed summaries of AI and SaaS vendors' data training policies that surface how each vendor uses, retains, and handles data.
- Playbooks to Scale Ongoing Governance: Automated workflows simplify tracking Acceptable Use Policy (AUP) acknowledgements, revoking risky data-sharing permissions, orchestrating account removals, and more.
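To make the risky integration point concrete: the risk Blasco describes comes from OAuth grants whose scopes give a third-party AI app standing access to mail or files. The following is a minimal, hypothetical sketch of how such an audit might work in principle; the grant records, scope list, and function name are illustrative assumptions, not Nudge Security's actual product logic or data model.

```python
# Hypothetical sketch: flag OAuth grants whose scopes expose sensitive data.
# The scope names below are real Google/Microsoft scope identifiers, but the
# grant records and risk list are illustrative assumptions for this example.

RISKY_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # full Google Drive access
    "https://www.googleapis.com/auth/gmail.readonly",  # read all Gmail messages
    "Mail.Read",                                       # Microsoft Graph mailbox read
    "Files.ReadWrite.All",                             # Microsoft Graph all files
}

def flag_risky_grants(grants):
    """Return the grants whose scopes intersect the known-risky set."""
    flagged = []
    for grant in grants:
        risky = set(grant["scopes"]) & RISKY_SCOPES
        if risky:
            flagged.append({
                "app": grant["app"],
                "user": grant["user"],
                "risky_scopes": sorted(risky),
            })
    return flagged

if __name__ == "__main__":
    # Illustrative sample data: one AI note-taker with broad Drive access,
    # one app with only a narrow read-only calendar scope.
    sample_grants = [
        {"app": "ai-notetaker", "user": "alice@example.com",
         "scopes": ["https://www.googleapis.com/auth/drive", "openid"]},
        {"app": "calendar-helper", "user": "bob@example.com",
         "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
    ]
    for hit in flag_risky_grants(sample_grants):
        print(f"{hit['app']} ({hit['user']}): {hit['risky_scopes']}")
```

In practice, a tool in this category would pull live grant data from identity-provider APIs rather than static records, but the core question is the same one the sketch asks: which apps hold which scopes, on whose behalf.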
Enterprise Times: What does this mean?
AI was always going to pose a significant challenge to organisations, but it is one the industry has seen before. The PC boom of the 1980s saw departments buy their own computers and software without IT approval. In the 1990s, it was internet-connected applications and new communications tools.
By the 2000s, it was the emergence of mobile computing and remote access to corporate systems, alongside the dotcom explosion. SaaS and the cloud were the story of the 2010s and continue today. However, none of those shifts saw the level of adoption and market penetration of generative AI and AI-enabled applications.
The new features here from Nudge Security are meant to address that explosion in AI. They are aimed at giving control back to compliance and security teams. The question is, will customers adopt them fast enough to wrest control back from AI?
The post Nudge Security unveils AI security governance platform appeared first on Enterprise Times.
