
The Invisible Identity Crisis: Why Traditional Security Can't See AI Agent Delegation

By Luke Bennett, Global Field CTO
December 3, 2025

The Problem Hiding in Plain Sight

Imagine this scenario: An analyst at your organization uses an AI agent to access company data. The agent processes the request, retrieves information from a restricted database, and returns the results. Everything appears normal in your security logs. But here's the catch: your security infrastructure saw two completely separate events that it cannot connect.

At the first boundary, your CASB or proxy logs show: "User analyst@corp.com made a POST to api.ai-agent.com." At the second boundary, your database access logs show: "Service Principal App_Service_ID_321 accessed RESTRICTED_DATABASE."

What's missing? The critical link between these two events: the fact that the service principal was acting on behalf of the analyst, delegated through an AI agent.
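To make the gap concrete, here is a minimal sketch of the two records a security team might see. The field names and values are illustrative, not an actual CASB or database log schema.

```python
# Illustrative sketch only: field names and values are hypothetical,
# not a real CASB or database audit log format.

casb_event = {
    "source": "casb_proxy",
    "user": "analyst@corp.com",
    "action": "POST",
    "destination": "api.ai-agent.com",
}

db_event = {
    "source": "database_audit",
    "principal": "App_Service_ID_321",
    "resource": "RESTRICTED_DATABASE",
}

# Neither record carries a shared correlation key (for example, an
# "on_behalf_of" field), so nothing ties the service principal's access
# back to the analyst who triggered it.
shared_values = set(casb_event.values()) & set(db_event.values())
print(shared_values)  # set() -- no common identifier to join on
```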

This blind spot forces security teams into an impossible choice: either over-block access to AI agent APIs entirely, or over-permit and hope nothing goes wrong. Neither approach is sustainable in the era of agentic AI, where the delegation problem compounds with every new agent deployed.

The Delegation Visibility Gap: An Industry-Wide Challenge

The user-to-agent delegation problem isn't new, but AI agents have amplified it exponentially. As LLMs become more capable and autonomous, there's growing interest in authenticated delegation, where human users can securely delegate and restrict the permissions and scope of agents while maintaining clear chains of accountability. However, the infrastructure to support this delegation visibility simply doesn't exist in most enterprises today.

When we grant AI agents the ability to act on our behalf, we risk losing visibility and control: an agent with excessive or vague authority can accidentally leak data, trigger unintended actions, or become a vector for lateral movement in systems. Headlines now appear almost daily about agentic AI data exposure, novel attacks, AI-related breaches, and even nation-state-sponsored AI swarms built to bypass identity defenses designed for the pre-AI era.

The challenge lies not just in the delegation itself, but in the authorization decision: Who decides what the agent can do? Under what context? And critically, how is that decision enforced and audited?

Multiple vendors are recognizing this challenge. Allowing AI agents to use human credentials breaks accountability, weakens security controls, and creates compliance risk by blurring who is responsible for specific actions. Each agent must operate under its own scoped, delegated identity to ensure every action is properly verified, governed, and attributed.

Organizations deploying AI agents face critical technical gaps in authentication and secrets management, including an identity and attribution gap where AI agents lack mechanisms to propagate user identity through execution chains. This breaks the link between user requests and downstream service calls, making user-specific permissions and audit trails challenging.

Multi-Hop Agents: When Complexity Becomes Exponential

If single-delegation chains are problematic, multi-hop Agent-to-Agent (A2A) scenarios are a security nightmare. The A2A protocol, recently introduced to enable AI agents to communicate and collaborate across different platforms, creates complex delegation chains that traditional security tools were never designed to monitor.

The distributed nature of multi-agent systems introduces four categories of delegation vulnerabilities: chain-of-trust verification, privilege escalation, audit trail fragmentation, and credential sharing.

Consider a practical example: A user asks their corporate AI assistant to book a business trip. The main agent delegates to a flight booking agent, which uses a service principal to access flight APIs. Simultaneously, a hotel agent delegates to a third-party booking service using OAuth tokens, while an expense agent checks budget constraints against the company's financial database. Each hop in this chain introduces new identities, new authentication mechanisms, and new potential security gaps.
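One way to picture the resulting delegation graph is as a nested structure. The sketch below is purely illustrative; the agent names, credential types, and fields are hypothetical, but it shows how quickly identities and credentials multiply across hops.

```python
# Hypothetical delegation chain for the trip-booking example.
# Agent names and credential types are illustrative, not a real schema.
trip_request = {
    "initiator": "user@corp.com",
    "agent": "corporate-assistant",
    "delegations": [
        {
            "agent": "flight-booking-agent",
            "credential": {"type": "service_principal", "id": "svc-flights-01"},
            "resource": "flight-api",
        },
        {
            "agent": "hotel-agent",
            "delegations": [
                {
                    "agent": "third-party-booking-service",
                    "credential": {"type": "oauth_token", "scope": "bookings.write"},
                    "resource": "hotel-api",
                }
            ],
        },
        {
            "agent": "expense-agent",
            "credential": {"type": "api_key", "id": "finance-readonly"},
            "resource": "financial-database",
        },
    ],
}

def count_identities(node):
    """Count every agent and credential introduced along the chain."""
    total = 1 + ("credential" in node)
    return total + sum(count_identities(d) for d in node.get("delegations", []))

print(count_identities(trip_request))  # 8 agents/credentials for a single user request
```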

Bearer tokens and static keys don't verify who holds them; once issued, a bearer token can be sent down a chain of agents or used across platforms without ensuring that the recipient is authorized, leading to loss of visibility across agent chains. An enterprise might authorize a primary agent to handle procurement but have no insight into how subtasks are delegated to secondary or tertiary agents.
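A minimal sketch of why this happens, using a simplified stand-in for a resource server rather than any specific API: validation checks only that the token is valid, never which agent in the chain is presenting it.

```python
# Simplified illustration of bearer-token semantics; not a real API.
ISSUED_TOKENS = {"token-abc123": {"scope": "procurement.read", "issued_to": "primary-agent"}}

def resource_server(token: str) -> str:
    """Grant access to any caller presenting a valid token."""
    grant = ISSUED_TOKENS.get(token)
    if grant is None:
        return "401 Unauthorized"
    # No check of WHO is presenting the token: possession alone is proof.
    return f"200 OK (scope: {grant['scope']})"

# The primary agent hands the same token to a sub-agent it spawned;
# the resource server cannot tell the difference.
print(resource_server("token-abc123"))  # 200 OK (scope: procurement.read)
```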

Each agent and capability becomes a target for spoofing, credential theft, or malicious prompt injection; once agents exchange sensitive data, it becomes difficult to ensure downstream agents honor data sovereignty or retention requirements. Agent-to-agent chains can introduce unpredictable latency and tracing gaps, making root-cause analysis and governance challenging.

Why Traditional Control Planes Fall Short

The uncomfortable truth is that existing identity and access management solutions were built for a different era. They cannot solve the delegation visibility problem because they were never designed to address it.

IAM (Identity and Access Management)

Traditional IAM platforms focus exclusively on human identities and their direct access to resources. They have no concept of delegation chains or the ability to track when a human's request triggers an autonomous agent that, in turn, uses multiple service accounts to access data across different systems.

PAM (Privileged Access Management)

PAM solutions focus on protecting high-privilege accounts, typically in sensitive environments. While valuable, they represent only a subset of the identity landscape. They don't track the vast majority of service accounts, API keys, or AI agent identities that exist across modern enterprises.

IGA (Identity Governance and Administration)

IGA tools provide important compliance and audit capabilities, but they operate on scheduled scans and periodic reviews. They create point-in-time snapshots rather than real-time, streaming visibility into identity behavior and delegation chains.

CASB and Cloud Security Tools

Cloud security tools provide visibility into cloud application usage, but they typically see only the initial user action, not the subsequent delegation chain through agents and service accounts.

The fundamental limitation: none of these control planes were designed to link human identities, AI agent identities, non-human identities (service accounts), and the resources they access into a unified, real-time access chain.

We have identity fragments, and agents can't delegate cleanly across them. In one app, a user like Sarah can approve $5,000 purchases; in another, $50,000; somewhere else, she has no limit at all. There's no global truth, just a series of disconnected authorization decisions baked into code, APIs, and tribal knowledge.

The Stakes: Why This Matters Now

The urgency of solving this problem cannot be overstated. Credential abuse remains the most common vector in data breaches in 2025, accounting for 22% of them, and the global average cost of a data breach reached $4.44 million in 2025.

Eighty percent of breaches involve compromised identities, and one in six breaches in 2025 involved AI-driven attacks. Organizations typically run 40 machine identities for every human account, yet most lack proper visibility into these potential vulnerabilities, and by 2035 the number of AI agents is expected to exceed 100 per human account as the application layer collapses into the AI layer.

Data breaches increased by 11% year-over-year, with cyberattacks accounting for nearly 78% of all confirmed data compromises. Supply chain attacks, which target the very delegation chains we're discussing, affected 690 entities and compromised the data of more than 78 million individuals in just the first half of 2025.

The prediction from Gartner is stark: 33% of enterprise applications will include agentic AI by 2028, up from less than 1% in 2024. Organizations face a 30-40x increase in Agentic AI deployments. Without proper visibility into delegation chains and access paths, this exponential growth represents an exponential increase in identity-related vulnerabilities.

The Solution: A Dynamic Identity System of Record

Solving the delegation visibility problem requires a fundamentally different approach: one that treats identity vulnerability management as a continuous, streaming process rather than a periodic scan. The solution must provide:

  1. Unified Discovery Across All Identity Types: The ability to continuously discover human identities, non-human identities (service accounts, API keys, OAuth tokens), and AI agent identities in a single platform.
  2. Real-Time Access-Chain Mapping: Dynamic linking of identity relationships from the initial human request, through agent delegation, to service account usage, and finally to data access (see the sketch after this list).
  3. Behavioral Context and Analysis: Not just "who accessed what," but "how often," "from where," "at what time," and "does this deviate from normal patterns."
  4. Streaming Architecture: Real-time event ingestion rather than polling-based snapshots, capturing transient activities that traditional tools miss.
  5. Cross-Environment Visibility: Coverage across on-premises infrastructure, cloud environments, and emerging agentic data platforms.
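As a thought experiment for the second requirement, the sketch below shows what stitching streamed identity events into a single access chain could look like. The event shapes and the delegation_id correlation key are assumptions for illustration, not an existing log standard.

```python
# Hypothetical sketch of requirement 2: stitching streamed identity events
# into one access chain. The event shapes and the "delegation_id"
# correlation key are assumptions, not an existing standard.
events = [
    {"delegation_id": "d-42", "actor": "analyst@corp.com", "target": "AI_Agent"},
    {"delegation_id": "d-42", "actor": "AI_Agent", "target": "App_Service_ID_321"},
    {"delegation_id": "d-42", "actor": "App_Service_ID_321", "target": "RESTRICTED_DATABASE"},
]

# Group events that share a correlation key, then walk them in arrival order.
hops = [e for e in events if e["delegation_id"] == "d-42"]
chain = [hops[0]["actor"]] + [hop["target"] for hop in hops]
print(" -> ".join(chain))
# analyst@corp.com -> AI_Agent -> App_Service_ID_321 -> RESTRICTED_DATABASE
```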

How Anetac Solves Delegation and Multi-Hop Identity Challenges

Anetac is the only Identity Vulnerability Management (IdVM) platform purpose-built to address the delegation visibility gap across all identity types and all environments.

Access-Chain Linkage: The Critical Differentiator

The core innovation of Anetac is its ability to link identities across delegation boundaries. When an analyst uses an AI agent that delegates to a service account to access data, Anetac maintains the complete chain:

analyst@corp.com → AI_Agent → Service_Principal_ID → RESTRICTED_DATABASE

By acting as the single source of truth for identity access-chains, Anetac's IdVM transforms identity security from static enforcement to adaptive, continuous protection, ensuring enterprises can safely adopt and scale both traditional and emerging identity types.

This capability extends to complex multi-hop scenarios. In an A2A workflow where a primary agent delegates to multiple specialized agents, each using different authentication mechanisms and accessing different resources, Anetac maintains the complete graph of relationships, showing:

  • Which human initiated the request;
  • Which primary agent received the delegation;
  • How tasks were subdivided to secondary agents;
  • What service accounts and tokens were used at each hop;
  • Which resources were ultimately accessed; and
  • Whether any step violated policy or exhibited anomalous behavior.

Enhancing Control-Plane Partner Solutions

When attackers exploit identity vulnerabilities, speed is critical; Anetac uses time-series analysis to flag risky behaviors and misused privileges across all identity types, surfacing early warning signs and root cause.

The platform enables a multi-step approach to policy enforcement for delegated identities via control-plane (enforcement point) partners:

Part 1: Context Injection (per transaction): Adding custom headers to API calls that propagate user attribution and data target information through delegation chains, enabling policy decisions at each partner’s control-plane boundary based on the full context. For example, the agent can add headers like X-Company-User-ID: analyst@corp.com and X-Agent-Data-Target: RESTRICTED_DB_FINANCE to the final data access call.
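From the agent's side, injecting that context might look something like the sketch below. The endpoint URL is hypothetical and the header names simply mirror the example above; a real deployment would follow the enforcement partner's header contract.

```python
import requests

# Illustrative only: the endpoint is hypothetical and the header names
# mirror the example above, not a fixed specification.
response = requests.post(
    "https://data-gateway.corp.example/query",               # hypothetical data access endpoint
    headers={
        "Authorization": "Bearer <service-principal-token>",  # the agent's own credential
        "X-Company-User-ID": "analyst@corp.com",              # who the agent is acting for
        "X-Agent-Data-Target": "RESTRICTED_DB_FINANCE",       # what the agent intends to touch
    },
    json={"query": "SELECT * FROM quarterly_forecasts"},
    timeout=10,
)
response.raise_for_status()
```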

Part 2: Risk Scoring and Tagging (per session, longer term): Dynamically adjusting user, non-human identity (NHI), or AI agent risk scores based on delegated activities over time (from Anetac’s access-chains), allowing policies to respond to the overall risk posture rather than just individual, or per transaction, actions. This can include pushing a high-risk score, providing policy to the control-plane, or applying a tag like User.Attribute: "Delegated_AI_Session_Active" to the user's profile.
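For illustration, a session-level update pushed to an enforcement partner might look something like the sketch below; the payload shape is hypothetical, and only the tag name comes from the example above.

```python
# Hypothetical payload a platform might push to an enforcement partner when
# a delegated AI session elevates a user's risk posture. Shape is illustrative.
risk_update = {
    "identity": "analyst@corp.com",
    "risk_score": 82,                      # 0-100, recalculated from access-chain activity
    "attributes": {
        "Delegated_AI_Session_Active": True,
    },
    "reason": "AI agent delegation touched RESTRICTED_DB_FINANCE outside baseline hours",
}
```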

Security teams, when combining Anetac with an enforcement partner, can create policies that were previously impossible, such as: "IF User is delegating to an AI Agent AND Agent is accessing RESTRICTED data AND User's normal behavior doesn't include this pattern THEN BLOCK and ALERT."
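Expressed as a minimal sketch, the example policy reduces to a conjunction of three conditions. The event fields and the baseline check below are illustrative, not a specific product's policy language.

```python
# Hypothetical evaluation of the example policy; field names and the
# baseline lookup are illustrative, not a real policy engine.
def evaluate(event: dict, baseline: set) -> str:
    delegating = event.get("delegated_via_ai_agent", False)
    restricted = event.get("data_classification") == "RESTRICTED"
    abnormal = (event["user"], event["resource"]) not in baseline

    if delegating and restricted and abnormal:
        return "BLOCK_AND_ALERT"
    return "ALLOW"

baseline = {("analyst@corp.com", "SALES_DASHBOARD")}   # resources this user normally touches
event = {
    "user": "analyst@corp.com",
    "resource": "RESTRICTED_DB_FINANCE",
    "data_classification": "RESTRICTED",
    "delegated_via_ai_agent": True,
}
print(evaluate(event, baseline))  # BLOCK_AND_ALERT
```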

The Only Path Forward

Solving the delegation visibility problem requires a purpose-built solution that treats all identities as part of a unified, dynamic system of record. It requires real-time streaming architecture that captures transient activities and behavioral patterns. It requires graph-based analysis that reveals the complex relationships between humans, agents, service accounts, and data.

Anetac is the only platform that delivers this comprehensive approach. The platform discovers human identities, non-human identities such as service accounts, and agentic AI identities, then maps their access chains to uncover over-privileged accounts and risky behaviors, operating seamlessly across on-premises, cloud, and hybrid environments.

As agentic AI transforms from a novelty to a core component of enterprise operations, the organizations that thrive will be those that can see the full picture: not just who accessed what, but the complete chain of delegation, the behavioral context, the policy violations, and the potential blast radius of every identity in their environment.

The question isn't whether you need visibility into AI agent delegation chains.

The question is: can you afford to wait while your blind spots grow exponentially with every new AI Agent deployment?
