68% of Orgs Can't Tell AI Agents from Humans — Here's Why That's a Problem
A survey from the Cloud Security Alliance, commissioned by Aembit, reveals a troubling gap in enterprise AI adoption: while 73% of organizations expect AI agents to become vital within the next year, 68% cannot reliably distinguish AI agent activity from human activity.
This identity crisis is not merely an operational inconvenience. It is a security and compliance vulnerability that grows more severe as AI agents take on more consequential tasks.
The Scope of Agent Adoption
AI agents are already embedded across enterprise workflows. According to the survey:
- 67% use task automation agents
- 52% use research agents
- 50% use developer-assist agents
- 50% use security or monitoring agents
Most deployments have moved beyond isolated test settings: 85% of organizations report using AI agents in production environments. These agents are not experiments — they are operational tools with access to real data and real systems.
The Identity Gray Area
How do organizations manage agent identity? The answers are concerning:
- 52% use workload identities
- 43% rely on shared service accounts
- 31% allow agents to operate under human user identities
Without a defined taxonomy for agent identity, organizations struggle to answer basic questions: Which actions were taken by humans? Which were taken by agents? Which agent took a specific action? Under whose authority did the agent act?
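One way to make those questions answerable is to attach a structured identity record to every action, capturing who acted and under whose delegated authority. The sketch below is illustrative only; the field names and the `ActorIdentity`/`attribute` helpers are assumptions for this example, not anything the survey or any specific IAM product defines.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class ActorIdentity:
    """Who (or what) performed an action, and under whose authority."""
    actor_type: str              # "human" or "agent"
    actor_id: str                # a unique identity -- never a shared account
    on_behalf_of: Optional[str]  # the delegating human or service, if any

def attribute(action: str, actor: ActorIdentity) -> dict:
    """Build an attribution record answering: who acted, and for whom?"""
    return {"action": action, **asdict(actor)}

# A human acting directly:
print(attribute("read_report", ActorIdentity("human", "alice", None)))
# An agent acting under Alice's delegated authority:
print(attribute("read_report", ActorIdentity("agent", "research-agent-7", "alice")))
```

With records like these, "which actions were taken by agents?" and "under whose authority?" become simple queries rather than forensic puzzles.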
The Permission Problem
The identity crisis is compounded by excessive permissions. The survey found:
- 74% of respondents said agents often receive more access than necessary
- 79% believe agents create new access pathways that are difficult to monitor
- 92% of security leaders are concerned about the use of AI agents across the workforce
"AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools weren't designed to handle," observed David Goldschlag, co-founder and CEO at Aembit.
The Incident Reality
These are not theoretical concerns. According to the survey, 88% of organizations have already experienced security incidents related to AI agent deployment. The Gravitee State of AI Agent Security 2026 report adds further detail:
- 32% have zero visibility into agent actions
- 36% are blind to machine-to-machine AI traffic entirely
- 23% report shadow deployments that IT doesn't know about
- Only 14.4% of agents went live with full security and IT approval
What Must Change
The research points to several imperatives for organizations deploying AI agents:
- Distinct agent identities. Agents should have their own identities, separate from human users and shared service accounts. This enables proper attribution and access control.
- Least-privilege access. Agent permissions should be scoped to the minimum required for their intended function — and reviewed regularly as agent capabilities evolve.
- Comprehensive logging. Every agent action should be logged with sufficient detail to reconstruct what happened, when, and under whose authority.
- Visibility and monitoring. Security teams need tools that can distinguish agent traffic from human traffic and surface anomalous agent behavior.
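The first three imperatives can be combined in a single enforcement point: check each agent action against a scoped permission set, and log every attempt, allowed or denied. This is a minimal sketch under stated assumptions; the `AGENT_SCOPES` table and scope strings are hypothetical stand-ins for whatever your IAM system provides, not a real product's API.

```python
import json
import time

# Hypothetical per-agent scoped permissions (least privilege: each agent
# gets only the scopes its function requires).
AGENT_SCOPES = {
    "research-agent-7": {"docs:read"},
    "deploy-agent-2": {"docs:read", "deploy:staging"},
}

def authorize_and_log(agent_id: str, scope: str, audit_log: list) -> bool:
    """Allow the action only if it falls within the agent's scopes, and
    record every attempt with enough detail to reconstruct it later."""
    allowed = scope in AGENT_SCOPES.get(agent_id, set())
    audit_log.append({
        "ts": time.time(),
        "actor_type": "agent",   # distinguishes agent from human traffic
        "actor_id": agent_id,
        "scope": scope,
        "allowed": allowed,
    })
    return allowed

log: list = []
authorize_and_log("research-agent-7", "docs:read", log)       # in scope
authorize_and_log("research-agent-7", "deploy:staging", log)  # denied
print(json.dumps(log, indent=2))
```

Logging denials as well as grants matters: denied attempts are often the earliest signal of an over-reaching or compromised agent.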
The Governance Imperative
The identity crisis in AI agent management is, at its core, a governance crisis. Organizations have deployed powerful autonomous tools without the infrastructure to understand what those tools are doing. Closing the gap requires investment in identity, access control, monitoring, and policy — the same fundamentals that underpin security and compliance for human users.
The difference is urgency. Shadow AI compounds risk faster than shadow IT ever did. The time to address the agent identity crisis is now.