Cisco Duo’s introduction of an Agentic Identity framework marks an important moment for the enterprise security industry. For years, identity security has been built around a human-centric model: authenticate the user, enforce MFA, monitor sessions. The rise of autonomous AI agents — systems that act independently, hold credentials, and execute tasks without real-time human oversight — exposes the limits of that model. The framework is an acknowledgement that the identity perimeter has fundamentally changed.

What Makes Agentic Identity Different

AI agents are not simply automated scripts. Modern agentic systems can reason, plan, and take multi-step actions across systems — browsing the web, writing and executing code, sending communications, and interacting with APIs. They operate with a degree of autonomy that makes them functionally similar to privileged users, but with none of the behavioural constraints that human users carry.

From a non-human identity (NHI) security perspective, AI agents present several distinct challenges. They require persistent credentials to function, but those credentials must be scoped carefully — an agent with excessive permissions can take actions far beyond its intended purpose. Their behaviour is dynamic and context-dependent, making static policy controls difficult to apply. And they can be chained: one AI agent delegating to another, creating complex trust hierarchies that are difficult to audit.
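One way to make the chaining problem concrete is a scope-narrowing rule: when one agent delegates to another, the delegate's permissions should be a subset of the delegator's, so delegation can only narrow scope, never widen it. The following is a minimal, hypothetical sketch — the class and scope names are illustrative, not part of Duo's framework:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentIdentity:
    name: str
    scopes: frozenset                          # permissions this agent may exercise
    delegator: Optional["AgentIdentity"] = None  # who handed it work, if anyone

def delegation_violations(agent: AgentIdentity) -> list:
    """Walk the delegation chain and report any scope widening."""
    violations = []
    current = agent
    while current.delegator is not None:
        # Any scope the delegate holds that its delegator lacks is a widening.
        extra = current.scopes - current.delegator.scopes
        if extra:
            violations.append(
                f"{current.name} holds scopes {sorted(extra)} "
                f"not granted to delegator {current.delegator.name}"
            )
        current = current.delegator
    return violations

orchestrator = AgentIdentity("orchestrator", frozenset({"tickets:read", "tickets:write"}))
helper = AgentIdentity("helper", frozenset({"tickets:read", "email:send"}), orchestrator)

print(delegation_violations(helper))
```

A check like this makes trust hierarchies auditable: the full chain can be walked mechanically rather than reconstructed from scattered credential grants.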

Traditional machine identity controls — certificate management, secrets rotation, service account governance — are necessary but not sufficient for Agentic Identity. The question is not just whether an AI agent has valid credentials, but whether its actions at runtime are consistent with its intended scope.

The Framework Approach

Cisco Duo’s Agentic Identity framework addresses several of these challenges. It introduces the concept of identity verification for AI agents at the point of action — not just at authentication time — enabling continuous validation that an agent is operating within its intended parameters. It also addresses delegation: when an AI agent acts on behalf of a human user, the framework maintains the chain of accountability, ensuring that delegated actions are traceable to the originating identity.
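The two ideas in that paragraph — validation at the point of action, and an on-behalf-of chain that traces delegated actions back to the originating identity — can be sketched as a single authorization hook. Everything below is my own illustration, not Duo's API; the agent names, scope table, and audit record fields are all assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionRequest:
    action: str          # e.g. "tickets:write"
    actor: str           # the agent attempting the action
    on_behalf_of: tuple  # delegation chain, originating human first

# Hypothetical declared scope per agent, checked at action time.
INTENDED_SCOPE = {
    "support-agent": {"tickets:read", "tickets:write"},
}

def authorize(request: ActionRequest, audit_log: list) -> bool:
    """Point-of-action check: is this action within the actor's declared scope?"""
    allowed = request.action in INTENDED_SCOPE.get(request.actor, set())
    # Record the full accountability chain regardless of the outcome.
    audit_log.append({
        "action": request.action,
        "chain": " -> ".join(request.on_behalf_of + (request.actor,)),
        "allowed": allowed,
    })
    return allowed

log = []
ok = authorize(ActionRequest("tickets:write", "support-agent", ("alice",)), log)
bad = authorize(ActionRequest("payments:refund", "support-agent", ("alice",)), log)
print(ok, bad)           # True False
print(log[0]["chain"])   # alice -> support-agent
```

The design point is that the check runs per action, not per session: a credential that was valid at authentication time does not exempt a later out-of-scope action from being denied and logged.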

For IAM practitioners, this framework-level thinking is valuable because it provides a conceptual model for governing Agentic Identity before the problem becomes acute. Organisations that are already deploying AI agents in production — in customer service automation, security operations, or IT workflows — need governance controls now, not after an incident.

The Broader Implications

Cisco’s entry into the Agentic Identity space with a dedicated framework signals that this is no longer a niche concern. When a major security vendor formalises its approach to AI agent identity governance, it accelerates adoption across the enterprise market. Security leaders should use this moment to audit their own Agentic Identity posture: which AI agents are running in their environment, what credentials do they hold, and what governance controls are in place? The organisations that answer those questions clearly today will be significantly better prepared for the identity security challenges of the next few years.
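The three audit questions above lend themselves to a simple inventory exercise. The sketch below is purely illustrative — the agent records, credential labels, and governance fields are invented for the example — but it shows the shape of a posture check an organisation could run against its own environment:

```python
# One record per AI agent: what it is, what credentials it holds,
# and which governance controls apply. All values here are made up.
inventory = [
    {"agent": "soc-triage-bot", "credentials": ["api-key:siem"],
     "governance": {"scoped": True, "rotation_days": 30}},
    {"agent": "it-helpdesk-agent", "credentials": ["oauth:itsm", "api-key:slack"],
     "governance": {"scoped": False, "rotation_days": None}},
]

def posture_gaps(inventory: list) -> list:
    """Flag agents missing basic governance controls."""
    gaps = []
    for record in inventory:
        if not record["governance"]["scoped"]:
            gaps.append(f"{record['agent']}: credentials not scoped to task")
        if record["governance"]["rotation_days"] is None:
            gaps.append(f"{record['agent']}: no credential rotation policy")
    return gaps

for gap in posture_gaps(inventory):
    print(gap)
```

Even a spreadsheet-level version of this answers the questions in the text; the value is in having the inventory at all before an incident forces its reconstruction.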