Can You Detect Intent Without Identity? Securing AI Agents in the Enterprise
- Sep 11, 2025
- Stephen Moore
- 4 minutes to read
Autonomous agents are no longer hypothetical. These agentic AI systems are already in enterprise environments, interacting with users, systems, and one another. They initiate actions, collaborate across tasks, and generate outcomes that can directly affect security posture. For defenders, this introduces a disruptive new element to threat detection, investigation, and response (TDIR).
These agents represent a new class of identity: nondeterministic, persistent, and sometimes opaque. They force a reexamination of how we define identity, understand intent, and enforce accountability — especially in environments where human and machine roles increasingly overlap.
Identity: Human, Nonhuman, or Something Else?
The foundational question for any security system is simple: who did it? When behavior looks suspicious, defenders start by identifying whether it came from a user, system, or process. But that logic collapses with agentic AI.
These agents don’t cleanly map to human or nonhuman entities. They inherit roles, reuse static credentials, or operate through ephemeral containers. They may even switch credentials mid-execution, just as adversaries do when hijacking legitimate access. In effect, they look like a credential switch inside an attack chain, but with no clear attribution path.
This gives rise to a new class: nondeterministic identities. They’re not stateless processes. They maintain context, chain actions, and can even be delegated across environments. Without consistent identity patterns, detection logic falters. Worse, these agents often have no designated owner, no governance wrapper, and no visibility in current security tooling.
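To make that concrete, here is a minimal sketch of what treating an agent as a first-class identity could look like: an explicit record that binds the agent to an accountable owner and keeps a trail of every credential it assumes. The field names (owner, spawned_by, credentials_used) are illustrative assumptions, not a standard or an Exabeam data model.

```python
# Hypothetical sketch: modeling an agent as a first-class identity so every
# action it takes can be tied back to an owner and a provenance chain.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str                      # stable identifier for the agent itself
    owner: str                         # accountable human or team
    spawned_by: str | None = None      # parent agent or workflow, if any
    credentials_used: list[str] = field(default_factory=list)  # every credential it has assumed
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def record_credential_switch(self, credential_id: str) -> None:
        """Log each credential the agent assumes, so mid-execution switches stay attributable."""
        self.credentials_used.append(credential_id)

# Usage: a provisioning agent spawned by a help desk workflow, owned by the IT platform team.
agent = AgentIdentity(agent_id="prov-agent-01", owner="it-platform-team", spawned_by="helpdesk-bot")
agent.record_credential_switch("svc-provisioning-token")
```

Even a lightweight record like this gives detection logic something consistent to key on, and gives governance a named owner to hold accountable.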
Intent and Accountability in Agent Behavior
If you can’t explain behavior simply, you can’t secure it. And with autonomous agents, intent becomes murky. Were they following a user prompt? Did they generate their own subtask? Were they compromised?
Traditional insider threat models depend on understanding human intent. But what about AI? Can AI even have intent? These agents don’t always operate deterministically. A prompt can spawn new chains of behavior that were not explicitly directed. If an agent triggers a sensitive action, the downstream risk may be entirely unaccounted for.
This leads to uncomfortable but necessary questions:
- Who owns the decision if an AI agent deviates from its parameters?
- Who is accountable for outages, misconfigurations, or exposures?
- Which name appears on the breach notification?
Security teams need new models for ownership and accountability. You can’t audit or enforce policy on an actor if you can’t determine its provenance.
The AI Attack Surface: Chained Behavior and Complex Risk
These agents are designed to interoperate and collaborate. One agent may call another to complete a task, and that second agent may branch into yet another sequence. These chains of execution make sense from an automation perspective, but they create investigative nightmares.
Imagine trying to trace back an unauthorized database access that began with a help desk automation, spawned a provisioning agent, and then passed through a cost-optimization tool. Who authorized what? Was anything compromised? Which agent made the final decision?
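One way to keep that chain traceable is to propagate a single trace ID from the first agent through every downstream call, so each action lands in the log under the same correlation key. The sketch below assumes hypothetical agent names and uses printed JSON as a stand-in for a SIEM log sink.

```python
# Hypothetical sketch: a trace ID generated at the start of the chain is passed
# through every agent-to-agent call, so an investigator can reconstruct
# "help desk -> provisioning -> cost optimizer" from the resulting log lines.
import json
import uuid

def log_action(trace_id: str, agent: str, action: str, target: str) -> None:
    # In practice this would ship to a SIEM; printing JSON keeps the sketch self-contained.
    print(json.dumps({"trace_id": trace_id, "agent": agent, "action": action, "target": target}))

def helpdesk_agent(ticket: str) -> None:
    trace_id = str(uuid.uuid4())                      # the chain starts here
    log_action(trace_id, "helpdesk-agent", "open_ticket", ticket)
    provisioning_agent(trace_id, account="new-analyst")

def provisioning_agent(trace_id: str, account: str) -> None:
    log_action(trace_id, "provisioning-agent", "create_account", account)
    cost_optimizer(trace_id, resource="db-replica")

def cost_optimizer(trace_id: str, resource: str) -> None:
    log_action(trace_id, "cost-optimizer", "resize", resource)

helpdesk_agent("TICKET-4711")
```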
Compounding the problem, these agents can be exploited. Prompt injections, misconfigurations, or adversarial inputs can cause agents to behave in unsafe or unintended ways. Even without malicious intent, poor prompt design or over-permissive access can cause real-world damage — from resource exhaustion to system outages to data exposure.
And once again, defenders are left asking: who did this?
The Control Plane vs. the Data Plane
To frame this problem structurally, consider the identity control plane and the operational data plane. Agentic AI spans both:
- In the control plane, agents receive permissions, tokens, and role-based access. These are often static, long-lived, and misaligned with least privilege principles.
- In the data plane, agents execute actions: creating, modifying, deleting, and accessing resources. These behaviors may be unlogged, unaudited, or interpreted as legitimate.
This duality is a gift to attackers. Autonomous agents can be compromised, then operate with persistence and privilege, while staying below the detection threshold. The inconsistency in how these identities are managed — user today, service account tomorrow, token the next day — means defenders lack standardized ways to analyze or restrict them.
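On the control-plane side, one hedged alternative to static, long-lived credentials is to mint a short-lived, narrowly scoped token per task and have the data plane verify scope and expiry before acting. The helper names and token format below are illustrative, not a specific product API.

```python
# Hypothetical sketch of ephemeral, least-privilege credentials for an agent task.
import secrets
from datetime import datetime, timedelta, timezone

def issue_task_token(agent_id: str, scope: str, ttl_minutes: int = 15) -> dict:
    """Issue a credential bound to one agent, one scope, and a short lifetime."""
    return {
        "token": secrets.token_urlsafe(32),
        "agent_id": agent_id,
        "scope": scope,                              # e.g. "db:read:billing" rather than broad admin
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

def is_valid(token: dict, required_scope: str) -> bool:
    """Data-plane check: the control-plane grant must still be live and match the action."""
    return token["scope"] == required_scope and datetime.now(timezone.utc) < token["expires_at"]

grant = issue_task_token("cost-optimizer", "db:read:billing")
print(is_valid(grant, "db:read:billing"))   # True while the grant is live
print(is_valid(grant, "db:write:billing"))  # False: scope mismatch
```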
Lessons From Stateful Threat Detection
The concept of state is critical. At Exabeam, we advanced the session-based model to tie identity to behavior over time. This was the key to solving problems like compromised credentials, lateral movement, and insider threats.
Every legitimate session has a start and end, an identity, an entitlement, and an owner. Agentic AI breaks this structure. These identities may not persist, may not have a defined start or end, and may operate in chains that span systems.
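As a rough illustration, those session properties can be expressed as a simple record, and activity that arrives without some of them (no owner, no defined end) is exactly the kind of stateless fragment that breaks the model. The field names and validation below are illustrative, not a description of any particular product’s internals.

```python
# Minimal sketch of the session properties described above: a legitimate session
# has a start, an end, an identity, an entitlement, and an owner.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    identity: str | None
    entitlement: str | None
    owner: str | None
    start: datetime | None
    end: datetime | None

def missing_state(s: Session) -> list[str]:
    """Return which of the expected session properties are absent."""
    return [name for name, value in vars(s).items() if value is None]

# An agent-driven action observed with no owner and no defined end breaks the model:
orphan = Session(identity="prov-agent-01", entitlement="db:write", owner=None,
                 start=datetime(2025, 9, 11, 10, 0), end=None)
print(missing_state(orphan))  # ['owner', 'end']
```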
When an action occurs without attribution, you’re blind. When actions lack state continuity, you can’t trace causality. And when identity isn’t bound to a human, accountability breaks down entirely.
The Risk of the Unknown Agent
What happens when you detect high-risk behavior but can’t determine who or what initiated it?
That’s becoming more common.
Teams must now classify agent behaviors:
- Is the agent acting within scope but dangerously?
- Is there no documented owner or approval for its actions?
- Is the agent compromised?
- Is it simply untrained or misconfigured?
- Is this a business process problem disguised as a security issue?
Post-incident, who handles retraining? Where are the feedback loops?
This begins to sound like managing people. But these aren’t people. They’re code. Yet they require onboarding, training, offboarding, permissions hygiene, and continuous evaluation.
Response Actions and Governance
Security leaders must rethink TDIR workflows to include agentic AI:
- Response planning:
  - Who investigates an agent-generated incident?
  - Is this Security, Engineering, Legal, or AI Governance?
- Governance:
  - Do contracts include language about agent behavior, breaches, and indemnification?
- Post-incident:
  - What does “lessons learned” mean when the actor is AI?
  - How do you remediate and ensure resilience?
The core of incident response is not just containment. It’s recovery to a more resilient state. But you can’t improve what you can’t explain. Without traceability, security teams can’t prevent recurrence.
Final Thoughts: The Defender’s Perspective
Agentic AI is already in your network. It’s running tasks, making decisions, and influencing outcomes. It might even be creating risk without your knowledge.
To secure it, you need visibility into:
- Who the agent is (identity)
- What the agent is allowed to do (entitlement)
- Why the agent did what it did (intent)
- Who is responsible when something goes wrong (accountability)
Your security stack must adapt to accommodate this new actor class. That means:
- Expanding identity modeling
- Applying behavior analytics to agent sessions (see the sketch after this list)
- Implementing ephemeral identity controls
- Governing AI training and prompt engineering
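As one hedged example of behavior analytics applied to agent sessions, the sketch below compares an agent’s activity in the current session against its own historical baseline and flags large deviations. The baseline numbers and threshold are illustrative assumptions, not a production detection rule.

```python
# Hypothetical sketch: flag an agent session whose action count sits far outside
# that agent's own historical baseline.
from statistics import mean, pstdev

def is_anomalous(current_action_count: int, historical_counts: list[int], z_threshold: float = 3.0) -> bool:
    """Return True if the current session deviates strongly from the agent's history."""
    baseline = mean(historical_counts)
    spread = pstdev(historical_counts) or 1.0   # avoid dividing by zero for flat baselines
    return abs(current_action_count - baseline) / spread > z_threshold

# A provisioning agent that normally performs ~10 actions per session suddenly performs 400:
history = [9, 11, 10, 12, 8, 10, 11]
print(is_anomalous(400, history))  # True
```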
Autonomous agents aren’t coming. They’re already here.
If you’re not modeling their behavior and enforcing their guardrails, you’re not ready.
👉 Autonomous agents are already in your environment. Make sure they’re secure.
Talk to our experts about building guardrails for AI agents.
Stephen Moore
Chief Security Strategist | Exabeam | Stephen Moore is a Vice President and the Chief Security Strategist at Exabeam, and the host of The New CISO podcast. Stephen has more than 20 years of experience in information security, intrusion analysis, threat intelligence, security architecture, and web infrastructure design. Before joining Exabeam, Stephen spent seven years at Anthem in various cybersecurity practitioner and senior leadership roles. He played a leading role in identifying, responding to, and remediating their data breach involving a nation-state. Stephen has deep experience working with legal, privacy, and audit staff to improve cybersecurity and demonstrate greater organizational relevance.