A striking finding has emerged from recent research: over two-thirds of workers cannot identify the specific actions being taken by AI agents operating within their organisations. The research points to lax access controls as the root cause, but the security implications extend well beyond a simple governance failure. The finding signals that enterprise AI deployment has raced ahead of the identity and access management infrastructure required to make it safe.
When humans can’t see what AI agents are doing, accountability breaks down entirely. And when access controls are too loose to constrain agent behaviour, the blast radius of any compromise — or simple error — becomes unacceptable.
Why Visibility Is the Foundation of NHI Security
NHI (non-human identity) security begins with discovery and visibility. You cannot govern machine identities you cannot see, and you cannot hold AI agents accountable for actions that aren’t logged and attributed. The finding that most workers lack visibility into AI agent activity reflects a broader failure of the identity governance frameworks that should provide this visibility as standard.
The access control dimension of this problem is equally critical. If AI agents have been granted broad, standing permissions rather than scoped, task-specific access, the potential for damage — whether through compromise, misconfiguration, or unintended behaviour — is significantly amplified. Lax access controls don’t just create security risk; they make accountability impossible after the fact.
What Security Teams Need to Fix
Agent action logging as a non-negotiable: Every action taken by an AI agent should be logged, attributed to a specific agent identity, and retained for audit purposes. This isn’t optional — it’s the baseline requirement for operating AI agents responsibly in an enterprise environment.
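As a rough illustration, the sketch below shows what attributed agent action logging can look like in practice. The agent identifier, action names, and `record_agent_action` hook are hypothetical rather than any specific product's API; the point is that every tool call is emitted as a structured, attributable audit event instead of being left implicit.

```python
import json
import logging
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_audit")

@dataclass
class AgentAction:
    """One auditable action, attributed to a specific agent identity."""
    agent_id: str      # stable identity of the agent, not the human who invoked it
    action: str        # what the agent did, e.g. "erp.read_invoice"
    target: str        # the resource acted on
    initiated_by: str  # human or workflow that triggered the agent
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_agent_action(event: AgentAction) -> None:
    # Emit a structured, machine-parseable audit event. In a real deployment this
    # would be shipped to a log pipeline or SIEM with retention configured for audit.
    logger.info(json.dumps(asdict(event)))

# Every tool call the agent makes passes through this hook.
record_agent_action(AgentAction(
    agent_id="agent-invoice-reconciler-01",
    action="erp.read_invoice",
    target="invoice/2024-0042",
    initiated_by="finance-automation-workflow",
))
```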
Least-privilege enforcement for machine identities: The research finding on lax access controls points directly to a least-privilege failure. AI agents should receive only the permissions required for their specific function, scoped to the systems and data they need to access. Regular access reviews should be applied to machine identities with the same rigour as human identities.
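To make least privilege concrete, here is a minimal deny-by-default sketch: each agent identity is mapped to the handful of action-and-resource scopes its task requires, and anything outside that mapping is refused. The agent names, scope labels, and `is_permitted` function are assumptions for illustration, not a specific policy language.

```python
# Each agent identity gets only the scopes its specific task requires.
AGENT_SCOPES = {
    "agent-invoice-reconciler-01": {
        ("erp.read_invoice", "invoice/"),
        ("erp.update_status", "invoice/"),
    },
}

def is_permitted(agent_id: str, action: str, target: str) -> bool:
    """Allow a tool call only if it matches an explicit scope for this agent."""
    for allowed_action, resource_prefix in AGENT_SCOPES.get(agent_id, set()):
        if action == allowed_action and target.startswith(resource_prefix):
            return True
    return False  # default deny: anything not explicitly granted is refused

# The agent runtime checks every call before executing it.
assert is_permitted("agent-invoice-reconciler-01", "erp.read_invoice", "invoice/2024-0042")
assert not is_permitted("agent-invoice-reconciler-01", "hr.read_record", "employee/77")
```

The default-deny posture is the design choice that matters here: new systems and data never become reachable simply because the agent's toolset grows, and access reviews only have to confirm an explicit, scoped grant list.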
Human oversight interfaces: Workers need tooling that surfaces what AI agents are doing in terms they can understand and act on. This means dashboards, alerts, and reporting capabilities that translate machine-speed agent activity into human-readable summaries — not raw log data that requires specialist interpretation.
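A small example of what that translation layer might do, assuming the structured audit events sketched above: collapse raw per-action log entries into a per-agent summary a reviewer can scan. The event fields and the `summarise_agent_activity` helper are illustrative.

```python
from collections import Counter
from typing import Dict, Iterable, List

def summarise_agent_activity(events: Iterable[dict]) -> str:
    """Collapse raw audit events into a short, human-readable summary per agent."""
    per_agent: Dict[str, Counter] = {}
    for event in events:
        per_agent.setdefault(event["agent_id"], Counter())[event["action"]] += 1

    lines: List[str] = []
    for agent_id, actions in per_agent.items():
        total = sum(actions.values())
        top = ", ".join(f"{name} x{count}" for name, count in actions.most_common(3))
        lines.append(f"{agent_id}: {total} actions (most frequent: {top})")
    return "\n".join(lines)

# Events parsed from the structured audit log sketched earlier.
print(summarise_agent_activity([
    {"agent_id": "agent-invoice-reconciler-01", "action": "erp.read_invoice"},
    {"agent_id": "agent-invoice-reconciler-01", "action": "erp.read_invoice"},
    {"agent_id": "agent-invoice-reconciler-01", "action": "erp.update_status"},
]))
```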
The accountability gap revealed by this research is a governance problem, not a technology problem. The technology to log, attribute, and control AI agent access exists. What’s missing in most organisations is the NHI security framework to apply it consistently — and the urgency to close that gap before it becomes a regulatory or incident-driven forcing function.