Sometime in the last eighteen months, the AI governance conversation changed. The old framing — models as tools, users as the decision-makers, outputs as the thing to govern — stopped fitting what was actually happening inside enterprise environments. What replaced it was messier, faster, and significantly harder to control. AI agents arrived. They did not ask for access governance reviews. They did not wait for security teams to develop frameworks. They spun up, connected to internal systems, and started executing business logic at machine speed while most security architectures kept watching for threats that looked like the ones from 2022.
Dor Sarig and Ziv Karliner of Pillar Security have published the most serious public attempt to frame this problem correctly. Their piece — "Securing the Agentic Workforce" — is not a product pitch dressed as analysis. It is a structural argument about why traditional security architectures fail for autonomous AI agents and what a purpose-built response looks like. GAIG is covering it because the argument holds up, the threat model is accurate, and the governance implications extend well beyond what Pillar is specifically selling.
"The security model for the agentic workforce barely exists. And the agents are already inside."
— Dor Sarig & Ziv Karliner, Pillar Security · via Securing the Agentic Workforce
The piece covers three core structural problems with existing security for AI agents, the scale of adoption driving urgency, and the four-layer architecture Pillar built to respond. This analysis works through all of it — where the argument is strongest, where governance teams need to read between the lines, and what the practical implications are for organizations running agents in production today.
The Agentic Workforce Is Already Inside Your Environment
The Pillar piece opens with a framing that most enterprise security teams have not fully internalized: AI agents are not tools employees use. They are workers that operate alongside employees. That distinction sounds semantic until you think through what it means operationally. A tool waits for a user to invoke it. An agent monitors a CRM pipeline, drafts follow-up emails, queries an internal database, calls an external API, enriches data from that call, and triggers a downstream workflow — all without a human approving each step. That is not a tool pattern. That is an autonomous actor pattern.
The adoption numbers Sarig and Karliner cite are not projections. They are current state. Independent surveys show more than 72% of organizations either using or testing AI agents, with 40% running multiple agents in production workflows. Roughly three million agents operate globally today, and enterprises are spinning up thousands more every week. Cisco's Jeetu Patel, speaking at RSA 2026, projected the long-term curve at 100 to 1,000 agents per human — trillions of agents inside the global economy within a timeframe that is not hypothetical.
72% of organizations are already using or actively testing AI agents in production workflows.
40% are running multiple agents simultaneously. The security architecture for governing them lags adoption by a significant margin in most environments.
Source: Pillar Security, citing independent survey data, April 2026
SACR's 2026 research on Unified Agentic Defense Platforms — cited in the Pillar piece — confirms the security gap explicitly: more than half of deployed AI agents run without active monitoring or security controls. That is not a technology availability problem. The monitoring and control technology exists. It is a governance program problem. Organizations deployed agents faster than they built the frameworks to govern them. The result is a workforce with enormous capabilities and almost no accountability structure watching what it does.
Related: The CISO's Guide to AI Pre-Failure Signals: How to Read Your Governance Stack Before Control Breaks
The shadow agent problem compounds this further. Sarig and Karliner describe a pattern that governance teams consistently underestimate: developers spin up agents in notebooks, SaaS platforms hand business users low-code agent builders, coding assistants connect to community MCP servers that never touch enterprise infrastructure. These agents do not route through corporate proxies. They do not register in cloud IAM. Many store credentials in plaintext on the endpoint. A security program that governs only the agents it knows about covers a fraction of the actual attack surface. Shadow AI for individual tools was bad enough. Shadow agents taking autonomous actions across production systems are a different category of exposure entirely.
Three Problems Traditional Security Cannot Solve
The structural argument at the center of the Pillar piece is worth spending time on because it goes beyond "agents are new so existing tools do not work." Sarig and Karliner identify three specific architectural mismatches between legacy security design and agentic AI behavior. Each one is precise and each one has direct implications for how governance teams need to think about their current stack.
Hidden Logic, Zero Visibility
Traditional security inspects what software does — firewalls inspect packets, DLP inspects data in motion, EDR watches process behavior, SIEM correlates events. AI agents introduce a layer none of those controls can see: the reasoning layer. Every agent action starts with an internal chain of thought that decides what to do, which tools to call, what data to access, and in what order. The most dangerous failures originate inside that reasoning. An attacker who hijacks an agent's goal — what the OWASP 2026 Agentic Top 10 classifies as ASI01, Agent Goal Hijack — does not produce obviously malicious behavior at the action layer. The agent reasons its way into harmful behavior, following legitimate-looking logic until a hidden payload triggers. No previous threat model covers this. Securing it means seeing inside the reasoning, continuously and in real time.
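To make the visibility gap concrete, here is a minimal sketch of what decision-layer instrumentation looks like: the agent loop emits a structured trace of each reasoning step and intended tool call before anything executes, so a monitor can inspect the plan rather than just the action. The names (`ReasoningStep`, `emit_trace`) are illustrative, not Pillar's API.

```python
# Minimal sketch of decision-layer instrumentation (illustrative names,
# not Pillar's API). Every planned tool call is traced BEFORE execution,
# so a monitor sees the agent's stated rationale, not just the action.
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class ReasoningStep:
    session_id: str
    step: int
    rationale: str   # the agent's stated reason for the next action
    tool: str        # which tool it intends to call
    arguments: dict  # with what arguments
    timestamp: float

def emit_trace(step: ReasoningStep) -> None:
    # In production this streams to a real-time monitor; print is a stand-in.
    print(json.dumps(asdict(step)))

def run_agent_step(session_id: str, step_no: int, plan: dict) -> None:
    emit_trace(ReasoningStep(
        session_id=session_id,
        step=step_no,
        rationale=plan["rationale"],
        tool=plan["tool"],
        arguments=plan["arguments"],
        timestamp=time.time(),
    ))
    # execute_tool(plan["tool"], plan["arguments"])  # visibility first, then action

run_agent_step(str(uuid.uuid4()), 1, {
    "rationale": "Customer asked for an invoice copy; fetch the record.",
    "tool": "crm.get_invoice",
    "arguments": {"invoice_id": "INV-1042"},
})
```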
Full Speed, No Safety Net
When a human employee behaves suspiciously, the security stack gets multiple chances to intervene. An alert fires. A manager gets notified. Someone revokes access before serious damage happens. When an AI agent goes rogue, no equivalent mechanism exists in most deployments. Nothing pauses the agent mid-execution, routes a pending action to a human for approval, or kills a session based on live behavior. Agents chain tools, call APIs, write to databases, and modify downstream systems at machine speed with none of the natural checkpoints human workflows create. OWASP classifies this as ASI08, Cascading Failures — no individual step looks malicious but the chain is. Static policy applied to a non-deterministic actor is not security. It is hope.
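What a safety net for that gap could look like is easy to sketch: a gate between the agent and its tools that pauses high-risk actions for human approval and fails closed when no answer comes. This is a minimal sketch under assumed names, not Pillar's implementation.

```python
# Minimal sketch of a runtime safety net for agent tool calls
# (hypothetical names, not any vendor's API). High-risk actions pause
# for human approval; an unanswered request fails closed.
HIGH_RISK_TOOLS = {"db.write", "payments.transfer", "iam.grant_role"}

class SessionKilled(Exception):
    pass

def request_human_approval(session_id: str, tool: str, args: dict) -> bool:
    # Stand-in for a real approval channel (ticket, chat prompt, pager).
    print(f"[APPROVAL NEEDED] session={session_id} tool={tool} args={args}")
    return False  # fail closed until a human explicitly says yes

def gated_tool_call(session_id: str, tool: str, args: dict, execute):
    if tool in HIGH_RISK_TOOLS:
        if not request_human_approval(session_id, tool, args):
            raise SessionKilled(f"{tool} blocked pending approval")
    return execute(tool, args)

# A low-risk read passes through; a transfer is held mid-execution.
gated_tool_call("sess-17", "crm.read", {"id": 7}, lambda t, a: "ok")
try:
    gated_tool_call("sess-17", "payments.transfer", {"amount": 500},
                    lambda t, a: "sent")
except SessionKilled as stop:
    print(stop)
```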
Too Fast, Too Many
The average enterprise already maintains 144 non-human identities for every human employee, according to data cited in the Pillar piece. The average employee navigates around 10 applications per day and needs 9.5 minutes to recover their workflow after each context switch. An AI agent executes thousands of tool calls per minute across hundreds of concurrent sessions with no context-switching penalty whatsoever. At scale, the volume of autonomous actions inside a production environment becomes astronomical. No human-led security operation can review that volume at human speed; it is structurally impossible. The only viable response is automated behavioral monitoring with real-time intervention capability — which most security programs have not built.
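The only check that scales with that volume is a behavioral baseline maintained per agent, machine to machine. A toy illustration of one such signal: an exponentially weighted moving average of tool-call rate, flagging sudden deviations for automated intervention. The thresholds and class names are illustrative, not a production detector.

```python
# Toy behavioral baseline: track each agent's tool-call rate with an
# exponentially weighted moving average (EWMA) and flag sudden spikes.
# Thresholds and names are illustrative, not a production detector.
class RateBaseline:
    def __init__(self, alpha: float = 0.1, tolerance: float = 3.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.tolerance = tolerance  # multiple of baseline that counts as anomalous
        self.baseline = None        # learned calls-per-minute rate

    def observe(self, calls_per_minute: float) -> bool:
        """Return True if this sample is anomalous; learn only from normal ones."""
        if self.baseline is None:
            self.baseline = calls_per_minute
            return False
        if calls_per_minute > self.tolerance * self.baseline:
            return True  # do not fold anomalies into the baseline
        self.baseline = (self.alpha * calls_per_minute
                         + (1 - self.alpha) * self.baseline)
        return False

monitor = RateBaseline()
for rate in [40, 45, 42, 41, 950]:  # final sample: the agent suddenly runs hot
    if monitor.observe(rate):
        print(f"anomaly: {rate} calls/min vs baseline ~{monitor.baseline:.0f}")
```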
These three problems are not independent. They interact and amplify each other. Zero visibility into reasoning means the safety net cannot trigger on the signals that actually matter. The speed and scale problem means that by the time something surfaces in a log, the agent has already taken hundreds of subsequent actions. Traditional security was not designed for an actor class that is invisible at the decision layer, unlimited at the execution layer, and operating at a volume that exceeds human review capacity. Building governance on top of it without purpose-built tooling is the definition of the gap GAIG has been documenting across its coverage of real-world AI failures.
What Pillar Actually Built: The Four-Layer Architecture
The second half of the Pillar piece moves from threat framing to architecture. Their platform organizes around four connected layers, each addressing a distinct gap in the traditional security model. The design logic is sound and worth mapping carefully because it reflects a broader architectural principle that applies across vendor evaluation for agentic AI security — not just Pillar specifically.
AI Ecosystem Integrations
Native connections to where agents actually live — code and pipeline environments, SaaS platforms, cloud infrastructure, endpoints. Pillar's argument is that you cannot govern what you cannot reach, and the perimeter of an agentic AI environment extends well beyond what traditional endpoint or network security tools cover. This layer is the foundation every other capability sits on.
AI Posture
Continuous discovery, supply chain analysis, agentic identity management, and AI security posture management. This is where shadow agents surface — the ones developers spun up without security involvement, the ones SaaS platforms provisioned automatically, the ones connected to community MCP servers that never touched enterprise infrastructure. You get a complete map of every AI asset in the environment, including the ones nobody sanctioned.
Risk Detection and Runtime Controls
The operational core. On the detection side: agentic red teaming, attack surface exposure analysis, real-time threat detection, and coding-agent risk assessment running continuously. On the controls side: adaptive guardrails, data leakage protection, AI gateway enforcement, and MCP tool protection — enforcing policy in real time with intervention in under 100ms when agent behavior drifts. This is the layer that closes the "full speed, no safety net" gap.
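The sub-100ms figure implies a hard engineering constraint worth naming: the policy check has to answer inside a fixed latency budget, and if it cannot, the gateway should fail closed rather than let the action through unevaluated. A sketch of that pattern, with hypothetical names (this is not Pillar's code):

```python
# Sketch of a latency-budgeted guardrail at an AI gateway (hypothetical
# names, not Pillar's implementation). If the policy engine cannot
# answer inside the budget, the call fails closed.
import concurrent.futures

BUDGET_SECONDS = 0.1  # the sub-100ms intervention window

def policy_check(tool: str, args: dict) -> bool:
    # Stand-in for the real policy engine (rules, classifiers, or both).
    return tool not in {"payments.transfer"}

def guarded_call(tool: str, args: dict, execute):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        try:
            allowed = pool.submit(policy_check, tool, args).result(
                timeout=BUDGET_SECONDS)
        except concurrent.futures.TimeoutError:
            allowed = False  # no verdict in time means no action
    if not allowed:
        raise PermissionError(f"{tool} blocked by runtime guardrail")
    return execute(tool, args)

print(guarded_call("jira.read", {"issue": "OPS-9"}, lambda t, a: "fetched"))
```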
Governance and Compliance
Policy enforcement, audit reporting, incident response workflows, and framework mapping across leading regulatory and security standards. This layer connects the operational security capabilities to the compliance evidence requirements that governance teams and regulators actually need — which is distinct from the security controls themselves and requires its own architecture to produce correctly.
The critical design principle across all four layers is that they are connected, not bolted together. This matters because the failure pattern in most enterprise AI security deployments is exactly the opposite — separate tools for discovery, separate tools for monitoring, separate tools for compliance documentation, with no live data flowing between them. An agent that gets discovered in Layer 2 has its behavioral baseline feeding Layer 3's detection. A Layer 3 intervention generates the audit trail that Layer 4 uses for compliance evidence. Each layer's output is the next layer's input. That integration is the architectural differentiator, not any individual capability within a layer.
Related: AI Governance Capabilities Explained: What Platforms Actually Do and How to Choose the Right One
What This Means for Governance Teams Right Now
Reading the Pillar framework through a governance lens — rather than a pure security lens — surfaces several implications that the piece does not state directly but that follow clearly from the argument. These are the things governance and compliance teams need to be thinking about as they read this piece and evaluate how it applies to their current posture.
The Identity Problem Is Bigger Than You Think
Pillar's agentic identity management capability in Layer 2 addresses something most governance frameworks have not formally named yet: AI agents need governed identities the same way human employees and service accounts do. An agent that can access your internal Jira, your GitHub repos, your Confluence docs, and your customer database through an MCP server is operating as a privileged account. The question of who authorized that identity, what scope it was granted, when that scope was last reviewed, and what audit trail exists of its actions is identical to the questions you answer for any privileged human user. Most governance programs have not extended their identity governance frameworks to cover agent identities. That gap is exploitable today, not theoretically.
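What a governed agent identity means in practice is a registry record answering the same questions you would ask of a privileged human account. Here is a hedged sketch of the minimum fields, with illustrative names rather than any standard schema:

```python
# Minimum viable agent identity record (illustrative fields, not a
# standard schema): the same questions answered for any privileged
# human account, answered for an agent.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentIdentity:
    agent_id: str             # unique, non-reusable identifier
    owner: str                # the named human accountable for it
    authorized_by: str        # who approved the deployment, under what review
    granted_tools: list[str]  # explicit tool/API scope, no wildcards
    data_scopes: list[str]    # what data it may touch
    last_scope_review: date   # when that scope was last re-justified
    audit_sink: str           # where its session logs land

jira_triage_bot = AgentIdentity(
    agent_id="agent-jira-triage-01",
    owner="alice@example.com",
    authorized_by="security-review-4412",
    granted_tools=["jira.read", "jira.comment"],
    data_scopes=["project:OPS"],
    last_scope_review=date(2026, 3, 15),
    audit_sink="s3://audit/agents/jira-triage-01/",
)
```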
Related: AI Governance Platforms That Cannot See Your Models Are Selling You Compliance Theater
Runtime Controls Are a Governance Requirement, Not Just a Security Feature
The EU AI Act's Article 72 post-market monitoring obligations require active analysis and response to what monitoring data shows — not just data collection. An agent that executes a harmful action and then gets logged is not a governed agent under that standard. The runtime intervention capability that Pillar describes — stopping or rerouting agent actions mid-execution based on live behavioral signals — is what separates a governance-ready agent deployment from one that can only produce documentation of what went wrong after the fact. Governance teams evaluating agentic AI security platforms need to ask specifically whether runtime intervention capability exists, not just whether the platform monitors agent behavior.
Related: Your AI Monitoring Dashboard Is Full of Data Nobody Acts On
The Compliance Evidence Gap Is Agent-Specific
Standard AI compliance documentation covers what AI systems were deployed, what policies applied to them, and what the risk assessments concluded. For agentic AI, that documentation framework is insufficient. Regulators and auditors increasingly need to see session-level evidence: what the agent did in a specific session, what tools it called, what data it accessed, and what human oversight was applied. The audit trail for an agent is fundamentally different from the audit trail for a static model. Layer 4 of Pillar's architecture is specifically about producing that agent-level audit evidence, and it is the layer governance teams most often forget to evaluate when looking at agentic security platforms.
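The difference between system event logs and session-level agent evidence is easiest to see as a data structure. A sketch of what one session record needs to capture, using an illustrative schema:

```python
# Sketch of a session-level agent audit record (illustrative schema).
# Note what it captures that a system event log does not: the ordered
# tool-call chain and the human oversight applied within one session.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str
    arguments: dict
    data_accessed: list[str]  # datasets or records touched
    outcome: str              # "executed", "blocked", or "escalated"

@dataclass
class AgentSessionRecord:
    session_id: str
    agent_id: str
    objective: str            # what the agent was asked to accomplish
    tool_calls: list[ToolCall] = field(default_factory=list)
    human_oversight: list[str] = field(default_factory=list)  # approvals, overrides

record = AgentSessionRecord(
    session_id="sess-8831",
    agent_id="agent-jira-triage-01",
    objective="Triage overnight OPS tickets",
    tool_calls=[ToolCall("jira.read", {"project": "OPS"},
                         ["OPS-2201", "OPS-2202"], "executed")],
    human_oversight=["no escalations; scope check passed"],
)
```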
Related: AI Compliance: Certifications, Frameworks, and Laws Explained
What Pillar Does Not Cover
The "Pillar" framework is technically comprehensive and the threat model is accurate. There are two dimensions that the piece underemphasizes and that governance teams need to supplement from their own program rather than expecting a security platform to solve.
The first is organizational accountability. Pillar's four layers describe a technology architecture for governing agents. They do not describe the human accountability layer that makes any technology governance program actually function. Who owns the alerts that Pillar's detection layer surfaces? Who has the authority to shut down a production agent mid-session based on behavioral signals? Who reviews the audit trail that Layer 4 generates and with what frequency? The technology can surface everything it needs to. Without named owners, defined response SLAs, and documented escalation paths, those signals go unread. The monitoring dashboard problem that GAIG has covered extensively applies directly here — a security platform producing signals that nobody is accountable for acting on is not governance, regardless of how technically sophisticated the signal generation is.
The second is the pre-deployment governance gap. Pillar's architecture is strongest in the production detection and response phase. The governance work that should happen before an agent reaches production — risk classification, scope definition, access justification, third-party vendor assessment for any external tools the agent connects to — is not what this platform is built to handle. Organizations need a governance program that covers the full agent lifecycle, with a platform like Pillar covering the production runtime end and a separate governance capability covering the pre-deployment classification and policy definition end. Treating a security platform as a substitute for a governance program produces exactly the kind of compliance theater that makes audit evidence look clean while the actual risk posture stays ungoverned.
Governance Implications by Function
Different teams inside an enterprise read the Pillar framework differently depending on their function. Here is how the core argument maps to specific governance, security, monitoring, and compliance implications for each stakeholder group.
| Function | Core Implication from the Pillar Framework | Immediate Action |
|---|---|---|
| CISO | The reasoning layer of AI agents is invisible to every existing security control in your stack. You need visibility at the decision layer, not just the action layer. | Audit every production agent for reasoning chain visibility. If you cannot see why an agent took an action, you cannot govern it. |
| Governance Lead | Agent identities need the same governance treatment as privileged human accounts. Most governance programs have not extended their frameworks this far. | Add agent identity registration and scope review to your existing identity governance process before the next agent reaches production. |
| Compliance Team | Session-level agent audit trails are a different artifact from the system event logs your current compliance program captures. They are not the same, and regulators will eventually distinguish between them. | Define what agent-level audit evidence looks like for your regulatory context and confirm your current platform can produce it. |
| Monitoring Team | Monitoring AI agent behavior requires behavioral baselines at the tool invocation and access pattern level — not just output quality metrics. Most monitoring programs are measuring the wrong layer. | Map your current monitoring signal coverage against the Pillar threat model. Identify which of the three structural gaps you have visibility into and which you do not. |
| Security Engineering | The shadow agent problem means your actual agent inventory is larger than your registered agent inventory. The gap between those two numbers is your unmonitored attack surface. | Run a discovery scan for agent activity across your environment before assuming your current inventory is complete. |
"The Pillar framework names something governance teams have been circling around without saying directly: AI agents are not tools, they are workers. And workers need governed identities, scoped access, behavioral monitoring, and accountability structures. The security architecture for them needs to match that reality — not adapt the architecture built for deterministic software and hope it holds."
Nathaniel Niyazov
CEO, GetAIGovernance.net
Our Take
The Pillar Security piece is the best public framing of the agentic security problem that currently exists. The threat model is accurate, the architectural response is coherent, and the three structural problems they identify — invisible reasoning, no safety net, and impossible scale — are exactly the problems that make traditional security architectures insufficient for autonomous AI agents. GAIG is referencing this piece because it advances the conversation in a way that matters for governance and compliance teams, not just security engineers.
The shadow agent problem deserves more attention than it typically gets in security discussions. Every governance program has a registered agent inventory and an actual agent inventory. The gap between them is usually larger than governance teams estimate because agents get spun up through channels that bypass formal registration — developer notebooks, SaaS low-code builders, coding assistant integrations. Pillar's discovery capability addresses this at the technical layer. The governance layer still requires organizations to close the process gap that allows agents to reach production without registration in the first place.
For organizations evaluating how to govern their agentic AI environment, the Pillar framework provides a useful architecture reference even if you are not evaluating Pillar specifically. The four-layer structure — ecosystem integration, posture management, runtime detection and controls, and governance and compliance — represents a reasonable maturity model for agentic AI security programs. Where you sit relative to that model tells you what to build next. The organizations that have all four layers connected and producing data that feeds each other are the ones that will catch agentic failures before they become incidents. The ones with point solutions that do not talk to each other are the ones that will read about their own incident in a post-mortem.
The question to bring into any agentic AI security platform evaluation is straightforward: can you show me what happened inside the agent's reasoning chain during a specific session, and can you show me what your platform did in response when that session deviated from expected behavior? The answers to those two questions tell you whether you are buying a governance program or buying a dashboard.