
We’ve spent years studying how identity is managed in organizations, and one question consistently catches executives and their teams off guard – “how many identities exist in your environment?” It’s tempting to answer this with a simple headcount, but that doesn’t factor in the hundreds or thousands of service accounts that have access to systems, applications, and datasets – some of them active, and some of them dormant long after they’ve served their purpose in a given project or function.
These machine identities don’t fit traditional onboard/offboard processes the way humans do. For one, they operate quietly in the background, invisible to anyone who isn’t actively looking for them. They also tend to be provisioned automatically, and their access persists long after their original purpose has been forgotten. Nobody thinks to “offboard” a machine agent or service account, revoking its access the way we would a human’s following a promotion or an exit from the company. So, over time, access accumulates, visibility declines, and risk steadily builds in places most organizations aren’t even thinking about looking.
How we got here: automation without accountability
Non-human identities (NHIs) didn’t suddenly become a problem or appear as a new “category” that must be managed. They emerged gradually as a natural byproduct of automation, AI integration, and the need for businesses to move faster. In many cases, the systems and processes designed to manage human users were simply extended to accommodate them. That worked for a time, but those systems were never designed to handle the scale or behavior of the machine-driven activity we’re seeing today.
In the earlier stages of NHI proliferation, some organizations resorted to treating machines as if they were people just to make the system work. It wasn’t uncommon for teams to create placeholder “employees” in HR systems purely to trigger account provisioning workflows, then use those credentials to orchestrate automated processes. How many employee records in your organization’s HR system have suspiciously recent dates of birth? Workarounds like this highlight the crux of the matter: identity governance frameworks were built with humans in mind, then stretched to fit something fundamentally different. Businesses stretch their capabilities and push boundaries all the time, but where security is concerned there is always a price to pay, and that debt is now coming due.
These shortcuts and extensions have now become embedded in how businesses operate. New services, integrations, and automation layers have been added, each introducing more identities with access to critical systems. But unlike human users, these identities are not revisited, revalidated, or retired in a structured way. So, what began as a practical solution has evolved into a systemic crisis, where automation has left accountability in the dust.
The “Chain of Custody” problem
What’s sorely lacking is a clear security baseline in which every NHI has a human owner. Not a shared mailbox or a whole team, but a named individual who is accountable for how that identity is created, used, and maintained. Without that chain of custody, there is no meaningful way to enforce responsibility or trace activity back to a decision point. And in environments where access equals action, that lack of accountability translates directly into risk. According to the Non-Human Identity Management Group, a shocking 97% of NHIs have “excessive” privileges that broaden the attack surface. Around 9 in 10 businesses frequently expose their NHIs to third parties, and 44% of tokens are exposed in the wild, sent or stored in platforms like Teams, in Jira tickets, or on Confluence pages.
These identities are created programmatically, inherited through integrations, or deployed as part of automation workflows with very little in the way of oversight. In some cases, they’re even capable of triggering other identities or actions, further distancing them from any semblance of human control. What’s left is an environment where activity is taking place inside critical systems, but no one can confidently say what’s going on or who’s responsible for it.
In the case of AI agents – bots that extend beyond automation and into active decision-making – the problem intensifies. There is growing discussion around whether machines can or should act totally independently, but from a governance perspective, the answer has to be a resounding “no.” That’s not to say AI agents can’t be used to their fullest potential, but the checks and balances must always come back to a human at some point in the chain. If that chain is broken, or doesn’t exist to begin with, organizations lose the ability to audit, enforce policy, or demonstrate control. That is exactly what regulators and security teams are increasingly demanding, particularly under frameworks like the EU’s Digital Operational Resilience Act (DORA), which emphasizes traceability, accountability, and strict control over access to critical systems in finance.
From human resources to “non-human” resources
NHIs significantly outnumber employees, so it stands to reason that they must be managed with the same level of structure and discipline. Businesses evolved to include human resources functions because managing people at scale required oversight, governance, and accountability. The same logic now applies to machines. What’s missing is a formalized way to manage this growing population as a workforce in its own right.
This is where the concept of “non-human resources” starts to take shape. Organizations need a defined function responsible for tracking every non-human identity, understanding what it is, what it does, and where it operates. They need to build a clear inventory, apply consistent taxonomies, and maintain visibility into how these identities interact with systems over time, because the cost of not doing that is now growing by the day.
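To make the inventory idea concrete, here is a minimal sketch of what a single NHI record might look like, assuming a Python-based tooling environment. The `NHIType` taxonomy and the `NHIRecord` fields are hypothetical illustrations of the kind of metadata worth capturing, not a reference to any particular product or standard.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class NHIType(Enum):
    """Illustrative taxonomy for non-human identities."""
    SERVICE_ACCOUNT = "service_account"
    API_TOKEN = "api_token"
    AUTOMATION_BOT = "automation_bot"
    AI_AGENT = "ai_agent"


@dataclass
class NHIRecord:
    identity_id: str           # unique ID in the IdP or secrets store
    nhi_type: NHIType          # where the identity sits in the taxonomy
    purpose: str               # why the identity exists, in plain language
    owner: str | None          # named human accountable for this identity
    systems: list[str]         # systems and datasets it can reach
    created: date              # when it was provisioned
    next_review: date | None   # when its access is next revalidated
```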
None of the above steps work unless ownership is established and enforced, and NHI lifecycles are actively managed. Every identity must have a defined purpose, a named human owner, and a clear point at which its access is reviewed or revoked. That includes identities created as part of automated workflows, integrations, or AI-driven processes. Put simply, if an identity can access something, it needs to be governed like any other participant in the environment; if it no longer needs that access, its continued existence must be justified or it should be removed entirely.
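A lifecycle check over that inventory could then be as simple as the sketch below, which flags any identity with no named owner, no documented purpose, or a lapsed review date. It assumes the hypothetical `NHIRecord` structure from the previous sketch.

```python
from datetime import date


def audit_inventory(inventory: list[NHIRecord],
                    today: date | None = None) -> list[str]:
    """Flag identities that violate the baseline: every NHI needs a named
    owner, a documented purpose, and a review date that hasn't lapsed."""
    today = today or date.today()
    findings: list[str] = []
    for record in inventory:
        if not record.owner:
            findings.append(f"{record.identity_id}: no named human owner")
        if not record.purpose.strip():
            findings.append(f"{record.identity_id}: no documented purpose")
        if record.next_review is None or record.next_review < today:
            findings.append(f"{record.identity_id}: access review missing or overdue")
    return findings
```

Identities that fail such a check either get an owner, a purpose, and a review date assigned, or they become candidates for decommissioning.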
Accountability is the foundation of AI trust
We all know that AI is booming. But in the race to capitalize on it, machine-based identities are being spun up in droves. Agentic systems can initiate actions, trigger workflows, and interact with other services with minimal – if any – human input. That means decisions are no longer just executed by machines but increasingly influenced by them, and the link between action and responsibility is eroding. Policies are beginning to reflect the need for human-in-the-loop controls, particularly when AI systems have access to sensitive data or critical infrastructure. The principle is sound: if an action has consequences, there must be a human ultimately accountable for it.
This is why a chain of custody must be inseparable from trust in AI-driven environments. No human owner? No trust. A chain of custody provides a clear line from action back to ownership, enabling organizations to audit behavior, enforce policy, and demonstrate control when it matters. Without that thread of responsibility, even the most advanced AI systems – however effective at their task – will only ever lead to uncertainty and risk. If an organization cannot clearly answer who owns a given identity, whether human or machine, it cannot confidently say it is in control of its own environment. And, regrettably, that is the situation businesses find themselves in today.
If AI is going to act on behalf of your business, it cannot exist without accountability. Without a chain of custody, you’re not scaling innovation – you’re scaling risk.
