At RSAC Conference 2026 — the annual gathering where the security industry's most consequential product and strategy conversations happen — Sumit Dhawan, CEO of Proofpoint, made a statement that cuts directly across the AI governance category. He said AI agents behave like humans and carry the same risk profile. They operate non-deterministically. They can be manipulated through prompt engineering. They require what he called "a purpose-built integrity framework" — an AI behavior safeguard layer — that must be coded into the technology itself rather than applied as a policy or a governance document afterward. This is not a vendor press release or a marketing claim. This is an observation delivered to a security practitioner audience at the industry's most scrutinized stage.
Traditional insider risk programs were built around one core detection mechanism — behavioral deviation. When a human employee's behavior diverges from their established pattern, the system escalates. Access to unusual systems, data exfiltration outside normal hours, communication with unknown external parties. The program works because human behavior is mostly predictable, deviations are detectable, and the human is accountable to a code of conduct. Dhawan's point is that AI agents satisfy none of those preconditions. They have no code of conduct. Their behavior is non-deterministic by design. They can be manipulated into taking unintended actions through inputs that look legitimate. They operate at machine speed across multiple connected systems simultaneously. The insider risk model was built for human actors with predictable behavioral patterns. AI agents are internal actors that can cause the same category of damage but through a fundamentally different mechanism — one the model was never designed to detect.
The security industry is adapting its frameworks to cover AI agents because the threat model requires it. The governance industry has not yet made the equivalent adaptation. Most AI governance programs were built around the assumption that the systems being governed produce outputs that humans then interpret and act on. AI agents take action autonomously, which means the governance framework built around human interpretation of outputs does not reach the layer where agent behavior actually occurs. The governance question centers on what the agent did, why it did it, what systems it accessed, and what happened as a result. That is a behavioral governance problem.
What Makes AI Agent Risk Structurally Different
The non-determinism problem is central. Traditional security controls were designed for deterministic, pattern-based logic: an action either matches a known signature or it does not. AI agents do not operate this way. Their outputs are probabilistic. The same input can produce different outputs at different times depending on context, model state, and the chain of tools and systems the agent is interacting with. This means behavioral baselines — the foundation of insider risk detection — are significantly harder to establish and significantly easier for an adversary to stay beneath. An agent that has been manipulated through prompt engineering may produce outputs that look normal in isolation while the cumulative pattern of its actions represents a significant deviation, one that only becomes visible after the damage has occurred.
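To make the baseline problem concrete, the sketch below shows one common approach in minimal form: build a frequency distribution of an agent's tool calls from a trailing window of approved sessions, then flag any session whose distribution diverges beyond a threshold. The action names, the threshold, and the divergence measure are illustrative assumptions, not a description of any specific product.

```python
from collections import Counter

def action_distribution(actions):
    """Normalize a list of tool-call names into a frequency distribution."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

def deviation_score(baseline, observed):
    """Total variation distance between two distributions (0 = identical, 1 = disjoint)."""
    names = set(baseline) | set(observed)
    return 0.5 * sum(abs(baseline.get(n, 0.0) - observed.get(n, 0.0)) for n in names)

# Baseline built from a trailing window of approved agent sessions (illustrative data).
baseline = action_distribution(
    ["crm.read"] * 60 + ["email.draft"] * 30 + ["calendar.read"] * 10
)

# A new session: mostly familiar calls, plus a bulk-export action the agent has never taken.
session = ["crm.read"] * 5 + ["crm.bulk_export"] * 4 + ["email.draft"] * 1

score = deviation_score(baseline, action_distribution(session))
THRESHOLD = 0.3  # illustrative; tuning this is exactly what is hard for non-deterministic agents
if score > THRESHOLD:
    print(f"deviation {score:.2f} exceeds threshold — escalate for review")
```

The hard part is not the arithmetic. It is that for a non-deterministic agent the approved sessions themselves vary, so the threshold has to be wide enough to tolerate legitimate variance yet narrow enough to catch a manipulated agent whose individual calls all look plausible.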
The accountability gap is equally important. When a human insider causes harm, there is a clear accountability chain. The person made a decision. There is a record of their access. There is a supervisor and a reporting structure. When an AI agent causes harm, the accountability chain is much less clear. Who authorized the agent to access those systems? What credential was it operating under? Who was the named human supervisor responsible for reviewing its behavior? What was the approval scope for the actions it took? In most current enterprise deployments the answers to those questions either do not exist or require significant forensic reconstruction after the fact. Dhawan's point is that this gap must be closed at the technology layer — coded into the system as an integrity framework — rather than addressed through policy documents that do not connect to what the agent actually does.
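As an illustration of what closing the gap at the technology layer implies, the record below shows the kind of structured accountability data that could be attached to an agent identity before it is issued any credential. The field names and shape are assumptions made for this sketch, not a schema from Proofpoint or any standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentAuthorization:
    """Answers the accountability questions a human insider program answers by default."""
    agent_id: str
    authorized_by: str            # who authorized the agent to access these systems
    credential_id: str            # what credential it operates under
    human_supervisor: str         # named person responsible for reviewing its behavior
    approved_actions: set = field(default_factory=set)  # approval scope for its actions
    expires: date = date.max

    def permits(self, action: str, today: date) -> bool:
        return today <= self.expires and action in self.approved_actions

# Illustrative record for a hypothetical sales-support agent.
record = AgentAuthorization(
    agent_id="agent-crm-assist-01",
    authorized_by="ciso-office",
    credential_id="svc-crm-assist",
    human_supervisor="j.rivera",
    approved_actions={"crm.read", "email.draft"},
    expires=date(2026, 12, 31),
)
print(record.permits("crm.bulk_export", date(2026, 6, 1)))  # False — outside approval scope
```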
What a Purpose-Built Integrity Framework Actually Means
Dhawan's specific language is important. He said AI agents require "a technology layer which is an AI behavior safeguard layer." That is a governance architecture description. What he is describing is a layer that sits between an AI agent and the systems it can access, observes the agent's behavior continuously, applies defined integrity constraints, and generates an audit trail from what the agent actually did rather than from what was approved before it was deployed. This is identical in function to what continuous production monitoring delivers in the AI governance context — a system that observes behavior as it happens rather than reviewing documentation after the fact. In the agentic AI context the stakes are higher because agents act autonomously and at speed, which means the gap between what was approved and what actually happened can grow very large in a very short window.
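Read as architecture, that safeguard layer is a mediation point between the agent and every tool it can reach: each action is checked against the defined constraints before it executes, and the outcome is recorded whether it executes or not. The minimal sketch below illustrates the pattern under assumed names (SafeguardLayer, a simple allow-list as the integrity constraint); it is not Proofpoint's implementation or any particular vendor's API.

```python
import time

class ActionBlocked(Exception):
    pass

class SafeguardLayer:
    """Mediates every tool call an agent makes: check constraints first, record the outcome always."""

    def __init__(self, allowed_actions, audit_trail):
        self.allowed_actions = allowed_actions  # integrity constraints (here: a simple allow-list)
        self.audit_trail = audit_trail          # append-only record of what the agent actually did

    def invoke(self, agent_id, action, tool_fn, **kwargs):
        entry = {"ts": time.time(), "agent": agent_id, "action": action, "args": kwargs}
        if action not in self.allowed_actions:
            entry["outcome"] = "blocked"
            self.audit_trail.append(entry)
            raise ActionBlocked(f"{agent_id} attempted unapproved action: {action}")
        result = tool_fn(**kwargs)
        entry["outcome"] = "executed"
        self.audit_trail.append(entry)
        return result

# Illustrative use with stub tools.
trail = []
layer = SafeguardLayer(allowed_actions={"crm.read"}, audit_trail=trail)
layer.invoke("agent-crm-assist-01", "crm.read", lambda record_id: {"id": record_id}, record_id="c-42")
try:
    layer.invoke("agent-crm-assist-01", "crm.bulk_export", lambda: None)
except ActionBlocked as exc:
    print(exc)
print(len(trail), "audit entries")  # both the executed call and the blocked attempt are recorded
```

The essential property is that the audit trail is generated at the mediation point itself, so the record of what the agent did cannot drift from what was approved on paper.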
The CISO bifurcation Dhawan named is also analytically useful. He said CISOs are splitting into two camps on AI safeguard implementation — proactive and wait-and-see. The proactive CISOs are building the behavioral governance layer now because they understand that the agent deployment surface is expanding faster than any reactive governance program can track. The wait-and-see CISOs are treating AI agent governance the same way they treated early cloud security — as something that can be addressed after the deployment has already scaled. The history of cloud security suggests that position creates a significant remediation problem when the regulatory or incident pressure arrives.
What Enterprise Teams Should Be Doing Right Now
Before deploying any AI agent into a production environment, three things need to exist. An authorization register — a written document specifying exactly what actions the agent is permitted to take, under what credentials, and who the named human supervisor is. A behavioral baseline — an established record of what the agent's normal output and action patterns look like so deviations can be detected rather than guessed at. And an audit trail mechanism — a technical system that records what the agent actually did, which systems it accessed, and what data it touched, automatically and continuously rather than reconstructed from logs after an incident. If none of those three things exist before an agent is deployed, the governance program has a gap regardless of how complete the pre-deployment approval process was. The moment this gap becomes a problem is when an agent takes an unauthorized action and no one can reconstruct the exact sequence that led to it.
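One way to operationalize that precondition is a deployment gate that refuses to register an agent until all three artifacts exist. The check below assumes the artifacts take roughly the shapes sketched earlier (an authorization record, a baseline distribution, an audit trail sink); the function name and field checks are illustrative.

```python
def deployment_gate(authorization, baseline, audit_trail) -> list:
    """Return the list of missing preconditions; deploy only if it comes back empty."""
    missing = []
    if authorization is None or not getattr(authorization, "approved_actions", None):
        missing.append("authorization register: approved actions, credential, named supervisor")
    if not baseline:
        missing.append("behavioral baseline: recorded distribution of normal agent actions")
    if audit_trail is None:
        missing.append("audit trail mechanism: continuous, automatic record of agent activity")
    return missing

gaps = deployment_gate(authorization=None, baseline={}, audit_trail=None)
if gaps:
    print("do not deploy — missing:", "; ".join(gaps))
```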
Our Take
Dhawan's framing at RSAC is significant not because Proofpoint is building a governance platform — they are building a security product — but because the Proofpoint CEO is describing a governance requirement in a security context at the industry's most visible annual event. When security leaders at that level start defining AI agent behavioral integrity as a governance problem that requires a dedicated technology layer, it means the security market is arriving at the same conclusion that the governance market has been slow to reach. The two conversations — AI governance and AI security — are converging on the same operational problem. Enterprises need a layer that observes what AI agents actually do in production and enforces behavioral constraints as the agent runs, not after it has already acted. This aligns with the NIST AI RMF GOVERN function, which calls for accountability structures and oversight to be established and maintained across the full AI lifecycle, a requirement that extends naturally to agent behavior in production.
What remains unresolved is that the insider risk model for AI agents does not yet have the equivalent of 20 years of enterprise insider risk program development behind it. The behavioral baseline problem for non-deterministic systems is genuinely hard. The credential and identity framework for AI agents is still being built across the identity governance market. If your organization is deploying AI agents without an authorization register, a behavioral baseline, and a continuous audit trail mechanism, the GAIG marketplace is where to evaluate the platforms building that layer. Enterprise teams can compare solutions in the AI Security and AI Monitoring categories that are specifically designed for production agent behavior rather than pre-deployment documentation.