Enterprise AI agents are moving from experimental pilots to production systems: 85% of organizations are running agent pilots, but only 5% have reached full deployment, according to Cisco President Jeetu Patel at RSAC 2026. That 80-percentage-point gap between testing and production reflects fundamental security and governance challenges that enterprises must solve before trusting autonomous systems with critical operations.
The shift represents a maturation beyond simple robotic process automation. Modern AI agents can plan multi-step tasks, use external tools, maintain memory across sessions, and coordinate with other agents — capabilities that create entirely new attack surfaces and identity management challenges.
Security Surface Expands Beyond Traditional Models
Traditional AI security focused on prompt attacks and model responses. Agentic systems expose four distinct attack vectors: prompt inputs, tool execution, memory storage, and agent-to-agent communication, according to Towards Data Science analysis.
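To make the four surfaces concrete, they can be mapped to independent validation hooks in an agent runtime. The Python sketch below is purely illustrative: the function names, injection markers, and tool allowlist are assumptions, not from any product or framework cited here.

```python
# Hypothetical sketch: one validation hook per agentic attack surface.
# Allowlists and markers are illustrative placeholders.

ALLOWED_TOOLS = {"search_docs", "read_file"}        # scoped tool allowlist
INJECTION_MARKERS = ("ignore previous", "system:")  # naive prompt screen

def check_prompt(text: str) -> bool:
    """Surface 1 (prompt inputs): reject obvious injection markers."""
    lowered = text.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)

def check_tool_call(tool: str) -> bool:
    """Surface 2 (tool execution): permit only explicitly scoped tools."""
    return tool in ALLOWED_TOOLS

def check_memory_write(entry: str, max_len: int = 1000) -> bool:
    """Surface 3 (memory storage): bound and screen what persists."""
    return len(entry) <= max_len and check_prompt(entry)

def check_agent_message(sender_id: str, trusted_peers: set) -> bool:
    """Surface 4 (agent-to-agent): accept only known, trusted senders."""
    return sender_id in trusted_peers
```

Real deployments would replace these naive string checks with policy engines and signed messages, but the structure shows why each surface needs its own control rather than a single input filter.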
Gravitee’s 2026 State of AI Agent Security report found that 88% of organizations reported confirmed or suspected AI agent security incidents in the past year. Only 14.4% of agentic systems went live with full security and IT approval.
The identity governance problem is particularly acute. VentureBeat reported that medical transcription agents updating electronic health records and computer vision agents running manufacturing quality control generate non-human identities that most enterprises cannot properly inventory, scope, or revoke at machine speed.
Investment Flows Into Autonomous Security Applications
Offensive security firm XBOW raised $35 million in Series C extension funding, bringing total funding to over $270 million at a $1+ billion valuation. The company’s platform uses AI reasoning and adversarial workflows to autonomously test applications for vulnerabilities, executing targeted attacks without human intervention.
“Each XBOW agent operates like an extension of our in-house red team, allowing us to scale offensive testing with speed and depth that was previously out of reach,” said Alex Krongold, director of Corporate Development & Ventures at SentinelOne.
The funding came from Accenture Ventures, DNX Ventures, Liberty Global Tech Ventures, NVentures, Samsung Ventures, and SentinelOne S Ventures, indicating enterprise appetite for autonomous security tools that can operate at machine speed.
Anthropic Introduces Self-Improving Agent Capabilities
Anthropic unveiled “dreaming” functionality for its Claude Managed Agents platform, allowing AI agents to learn from past sessions and improve performance over time. The capability addresses enterprise demands for self-correcting systems before trusting agents with production workloads.
Early results show significant performance gains:
- Legal AI company Harvey saw a 6x increase in task completion rates after implementing dreaming
- Medical document review company Wisedocs cut document review time by 50% using outcomes features
- Netflix now processes logs from hundreds of builds simultaneously using multi-agent orchestration
Anthropic moved two experimental features — outcomes and multi-agent orchestration — from research preview into public beta, making them broadly available to developers. CEO Dario Amodei disclosed that company growth has outpaced internal projections in Q1 2026.
Enterprise Architecture Shifts From Bots to Orchestration
Forbes analysis by Sanjoy Sarkar argues that enterprises must evolve beyond “bot-centric thinking” toward intelligent orchestration. Many organizations face “automation sprawl” — multiple platforms performing similar functions with uneven governance and fragmented visibility.
“The next phase of enterprise transformation will not be defined by more bots. Instead, it will be defined by how intelligently automation is architected, governed and orchestrated across the enterprise,” Sarkar wrote.
The shift requires:
- Centralized governance models across business units
- Unified credential management and monitoring
- Architectural cohesion to prevent platform duplication
- Intelligent orchestration rather than simple task automation
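The unified credential management point can be sketched as a single broker that issues short-lived, scoped credentials to every agent platform, instead of each platform holding its own long-lived secrets. This is an assumed design, not a pattern attributed to Sarkar or any vendor; all names are illustrative.

```python
# Hypothetical sketch of unified credential management: one central
# broker issues short-lived, scoped tokens to agents across platforms.
import secrets
import time

class CredentialBroker:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        # token -> (agent_id, allowed scopes, expiry timestamp)
        self._issued = {}

    def issue(self, agent_id: str, scopes: set) -> str:
        """Mint a scoped token that expires after the configured TTL."""
        token = secrets.token_hex(16)
        self._issued[token] = (agent_id, frozenset(scopes), time.time() + self.ttl)
        return token

    def authorize(self, token: str, scope: str) -> bool:
        """Allow an action only if the token is known, unexpired, and in scope."""
        record = self._issued.get(token)
        if record is None:
            return False
        _, scopes, expiry = record
        return time.time() < expiry and scope in scopes
```

Because every platform requests credentials from the same broker, governance and monitoring concentrate in one place, which is the architectural cohesion the list above describes.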
Identity and Access Management Becomes Critical Bottleneck
IANS Research found that most businesses lack role-based access control mature enough even for their existing human identities, and AI agents will make governance significantly harder. The 2026 IBM X-Force Threat Intelligence Index reported a 44% increase in attacks exploiting public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery.
Cisco’s Michael Dickman outlined a trust framework requiring:
- Agent identity verification before system access
- Continuous monitoring of agent behavior and scope
- Automated revocation capabilities for compromised agents
- Audit trails for regulatory compliance
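The four requirements above can be sketched as a small agent registry that verifies identity before access, supports immediate revocation, and appends every decision to an audit trail. The class and method names are hypothetical and are not taken from Cisco's framework.

```python
# Hypothetical sketch of the trust-framework steps: verify identity
# before access, revoke at machine speed, and keep an audit trail.

class AgentTrustRegistry:
    def __init__(self):
        self._registered = set()  # agent identities verified at onboarding
        self._revoked = set()     # compromised agents, blocked immediately
        self.audit_log = []       # append-only record for compliance review

    def register(self, agent_id: str):
        """Verify and enroll an agent identity."""
        self._registered.add(agent_id)
        self.audit_log.append(("register", agent_id))

    def revoke(self, agent_id: str):
        """Block a compromised agent; takes effect on the next check."""
        self._revoked.add(agent_id)
        self.audit_log.append(("revoke", agent_id))

    def check_access(self, agent_id: str) -> bool:
        """Gate every system access and record the decision."""
        allowed = agent_id in self._registered and agent_id not in self._revoked
        self.audit_log.append(("access", agent_id, allowed))
        return allowed
```

A production system would back this with cryptographic attestation and continuous behavioral monitoring rather than in-memory sets, but the control flow (verify, gate, revoke, log) is the same.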
Apono’s 2026 report found that 98% of cybersecurity leaders report friction between accelerating agentic AI adoption and meeting security requirements, resulting in slowed or constrained deployments.
What This Means
The 80-percentage-point gap between AI agent pilots and production deployments reflects a maturation challenge, not a technology limitation. Enterprises have proven agent capabilities work in controlled environments but struggle with the security, governance, and identity management requirements for production systems.
The emergence of self-improving agents like Anthropic’s dreaming capability and autonomous security tools like XBOW suggests the technology is advancing faster than enterprise security frameworks can adapt. Organizations that solve the identity governance problem first will gain significant competitive advantages in deploying agentic workflows.
The shift from bot-centric automation to intelligent orchestration represents a fundamental architectural evolution. Success will depend less on the number of agents deployed and more on how intelligently they’re governed, secured, and coordinated across enterprise systems.
FAQ
What’s the main barrier preventing AI agents from reaching production?
Identity governance and security frameworks. Most enterprises lack mature role-based access control for human identities, and AI agents create non-human identities that are even harder to manage, monitor, and revoke at machine speed.
How do AI agent security risks differ from traditional AI security?
Traditional AI security focused on prompt attacks and model responses. Agents expose four attack surfaces: prompt inputs, tool execution, memory storage, and agent-to-agent communication. They can plan, execute actions, and maintain persistent state across sessions.
What results are early adopters seeing with production AI agents?
Significant performance improvements: Harvey saw 6x higher task completion rates, Wisedocs cut document review time by 50%, and Netflix processes hundreds of build logs simultaneously. However, 88% of organizations also reported security incidents in the past year.
Related news
- Running AI agents to automate outreach at scale – HuggingFace Blog
Sources
- AI agents are running hospital records and factory inspections. Enterprise IAM was never built for them. – VentureBeat
- The AI Agent Security Surface: What Gets Exposed When You Add Tools and Memory – Towards Data Science
- Anthropic introduces “dreaming,” a system that lets AI agents learn from their own mistakes – VentureBeat