Security Research

Delinea 2026 Identity Security Report: The AI Security Confidence Paradox Putting Enterprises at Risk

Delinea surveyed 2,001 IT decision-makers across seven countries and found that 87% believe their identity security posture is ready for AI-driven automation at scale — while 46% of those same organizations admit their identity governance around AI systems is deficient. This is what that gap actually looks like, and what it costs when it closes the wrong way.

Updated on April 21, 2026

The AI mandate from business leadership is unambiguous: accelerate adoption or fall behind. That pressure is real, and it is reshaping how quickly AI agents, automation workflows, and non-human identities are being deployed inside enterprises. What the Delinea 2026 Identity Security Report makes clear is that the pressure is also reshaping something else — how thoroughly organizations are skipping the identity controls those deployments actually require.

Delinea commissioned Censuswide to survey 2,001 IT decision-makers across seven countries who are actively using or piloting AI in their environments. The findings reveal a structural tension the report calls the AI security confidence paradox: organizations express high confidence in their security readiness for AI while simultaneously admitting they lack the foundational capabilities to back that confidence up. They acknowledge gaps in identity discovery, monitoring, and privilege control. Under constant pressure to loosen identity controls in favor of deployment velocity, risk managers often comply — and the gaps widen.

The identity security problem at the core of this report is not new to enterprise security. Traditional IAM was built for human users and relatively static machine accounts. What changed is the population of identities that now require governance: AI agents that make non-deterministic decisions, request elevated privileges dynamically, and operate across connected systems at machine speed without human approval at every step. Legacy identity tools were never designed for this, and the survey data shows that most organizations have not yet built the infrastructure to replace them.

The report's central finding is uncomfortable in a specific way: most organizations are not failing to understand the risk. They are knowingly trading identity control for operational velocity, accepting standing access as the default because they lack viable alternatives and cannot afford the friction of slowing down. That is not an ignorance problem. It is a tooling and governance infrastructure problem — and the gap between awareness and corrective action is where the next wave of identity-based breaches will originate.

Key Findings

87% say their identity security posture is prepared for AI-driven automation at scale

46% of those same organizations admit their identity governance around AI systems is deficient

2x more likely to give low marks to identity discovery in AI environments than in legacy systems

  • 82% of organizations report high confidence in their ability to discover non-human identities with access to production systems — but fewer than 1 in 3 validate NHI and AI agent inventories against actual usage or access patterns in real time.

  • 90% of organizations admit to having at least some identity visibility gap. AI-related environments show persistent discovery gaps at 51% — nearly double the rate for legacy and on-premises systems at 27%.

  • 42% of organizations say AI expansion is one of the top factors that increased their NHI risk in the past 12 months, outpacing increased automation and CI/CD velocity (26%) and growth in cloud native workloads (26%).

  • In 90% of organizations, security teams face pressure to loosen access controls to support AI-driven automation, with nearly 1 in 5 reporting that pressure as strong. When security requirements conflict with business speed, fewer than 1 in 3 organizations say security requirements are consistently enforced.

  • 53% of organizations regularly encounter unsanctioned AI tools and agents accessing company systems or data. Only 28% can detect shadow AI in real time — most detection takes hours to days.

  • NHIs now outnumber human accounts approximately 82 to 1 in enterprise environments, up from 46 to 1 two years ago. Agentic AI is accelerating this ratio further, and less than a quarter of organizations have documented policies for creating or removing AI identities.

  • 80% of organizations are unable to always understand why an NHI took a privileged action. AI agents, unlike previous automation, are non-deterministic — they make contextual decisions and can request elevated privileges dynamically without explicit scripting.

  • Static, long-lived credentials are the most common access method for NHIs and AI agents, used by 35% of organizations. Only 17% use just-in-time authorization and 8% use ephemeral access. One in 10 organizations cannot say how access to NHIs is granted at all (a minimal sketch of the standing-versus-just-in-time difference follows this list).

  • 73% of respondents agree that standing access for NHIs and AI agents increases risk. 74% say it is necessary to meet uptime expectations. 68% say security teams often accept standing access under operational pressure. All three are true simultaneously.

  • 59% of organizations say they lack viable alternatives to standing access for NHIs and AI agents. The biggest single barrier to reducing standing access is performance or reliability concerns, cited by 25%.

  • 92% of organizations believe AI will amplify identity-related threats over the next several years. Credential stuffing and password attacks (33%) and privileged account compromise (31%) lead their specific concerns.

  • Workflow friction from identity and access controls is high across all categories. Deploying AI agents shows 74% moderate-to-high friction, with 38% rating it high — the second highest category after cloud and infrastructure provisioning at 79%.
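To make the access-method finding above (35% static credentials, 17% just-in-time, 8% ephemeral) concrete, here is a minimal sketch of the difference between a standing credential and a just-in-time one. Everything in it — the helper name, the scope strings, the 15-minute TTL — is an illustrative assumption, not the report's methodology or any vendor's API.

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Credential:
    identity: str                 # the NHI or agent the credential belongs to
    scope: tuple[str, ...]        # narrowest permissions that cover the task
    expires_at: datetime | None   # None models a static, long-lived credential
    token: str

def issue_jit_credential(identity: str, scope: tuple[str, ...],
                         ttl_minutes: int = 15) -> Credential:
    """Mint a short-lived, task-scoped credential instead of a standing one.

    Hypothetical helper: in practice this would call a secrets broker or
    workload identity provider rather than generate tokens locally.
    """
    return Credential(
        identity=identity,
        scope=scope,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        token=secrets.token_urlsafe(32),
    )

def is_valid(cred: Credential) -> bool:
    # A static credential (expires_at is None) never fails this check,
    # which is exactly the standing-access risk the survey describes.
    return cred.expires_at is None or datetime.now(timezone.utc) < cred.expires_at

# An agent gets access for the duration of one task, then the token dies.
cred = issue_jit_credential("billing-reconciler-agent", ("read:invoices",))
assert is_valid(cred)   # valid now, automatically invalid in 15 minutes
```

The point of the sketch is the expires_at field: a standing credential has no expiry to enforce, so every control that depends on expiry silently does nothing.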

The Core Paradox

Organizations are not failing to understand the risk. They are knowingly accepting it. The survey data shows that most organizations know standing access increases risk, cannot explain what their NHIs are doing, and are under constant pressure to loosen controls anyway. That is not an awareness gap — it is a tooling and governance infrastructure gap that the current AI race is making structurally worse.

What the Report Covers

The AI Security Confidence Paradox

The report's opening analysis establishes the central tension: a 52-point gap between the 82% of organizations that say they are very confident in their ability to discover NHIs with access to production systems, and the 30% that actually validate NHI and AI agent activity in real time. This pattern holds across every dimension of identity governance the survey tests. Confidence levels are consistently high. Follow-up questions about the specific mechanisms underlying that confidence consistently reveal limited validation and incomplete oversight. The report identifies this as paradoxical thinking — organizations advancing agentic AI without modernizing the identity controls required to support it, likely because their confidence is built on incomplete information rather than measured capability.
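The distance between claimed discovery and real-time validation can be expressed in a few lines of set logic. The sketch below is an illustration under assumed inputs — a registered NHI inventory and identities observed in access logs — not the survey's measurement method:

```python
# Sketch: reconcile a claimed NHI inventory against observed activity.
# Both inputs are hypothetical; real sources would be an identity registry
# and authentication/audit logs.

registered_nhis = {"ci-deploy-bot", "billing-reconciler-agent", "backup-svc"}

observed_in_logs = {"ci-deploy-bot", "billing-reconciler-agent",
                    "summarizer-agent-7"}   # never registered: a visibility gap

unknown_actors = observed_in_logs - registered_nhis   # active but undiscovered
dormant_entries = registered_nhis - observed_in_logs  # registered but unused

print(f"Unregistered identities with live access: {sorted(unknown_actors)}")
print(f"Inventory entries with no observed activity: {sorted(dormant_entries)}")
```

Discovery confidence only means something if the first of those two sets is computed continuously and stays empty; the survey suggests fewer than a third of organizations run any real-time equivalent of this check.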

GAIG Context

This pattern mirrors what GAIG's analysis of AI governance platforms has documented: organizations purchasing compliance tooling based on self-reported assessments rather than technical validation. See AI Governance Platforms That Cannot See Your Models Are Selling You Compliance Theater for the governance equivalent of this confidence gap.

The Identity Visibility Gap

Section 3 documents the visibility gap in specific terms. 90% of organizations admit to at least some identity visibility gap. The gap is most acute in AI-related environments, which show persistent discovery problems at 51%, nearly double the 27% rate for legacy and on-premises systems. Machine and NHI accounts are the single largest identity type creating visibility gaps, cited by 40% of organizations, narrowly ahead of general workforce identities at 36%. The report notes this proximity matters: organizations have not finished solving identity visibility for human users, and AI is already accelerating the NHI problem to a new scale.

The challenge identified is not that organizations are unaware. 42% know AI expansion is driving their NHI risk upward. 38% worry about excessive autonomy or privilege. 35% are concerned about limited auditability and explainability. 32% worry about rapid identity proliferation. The problem is that without visibility, these concerns stay abstract rather than operational. Organizations cannot detect anomalous behavior or investigate suspicious actions in systems they cannot observe. The report's framing is direct: until the visibility gap closes, governance will struggle to progress.

GAIG Context

GAIG's AI Monitoring coverage addresses exactly this signal gap. Visibility without action is data storage, not security. See Your AI Monitoring Dashboard Is Full of Data Nobody Acts On and AI Monitoring Signals Explained for the framework that connects visibility to operational response.

Why Identity Weaknesses in AI Remain Invisible: Three Root Causes

Section 4 identifies three interconnected drivers that make AI identity risk structurally invisible. Each creates its own failure mode and each reinforces the others.

The first is speed prioritized over governance. The operational and competitive risks of slowing AI deployment are outweighing security risks in most decision-makers' calculus. Identity and access controls create friction in every workflow category the survey tests — cloud provisioning, third-party vendor access, CI/CD pipelines, automation workflows, and AI agent deployment all show 65–79% moderate-to-high friction rates. When that friction conflicts with business speed, fewer than 1 in 3 organizations consistently enforce security requirements. Approximately 25% grant exceptions on a case-by-case basis. Another 25% temporarily disable controls or grant standing privileges — and as one expert quoted in the report notes, "temporarily disabled" rarely gets revisited.

"We keep saying we need to build security in, not bolt it on. But then, every new tech paradigm we give a security hall pass. People go out and do a bunch of innovative new greenfield projects, and security has to come in after the fact and try to harden it."

Chris Hughes, Resilient Cyber
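One way to keep "temporarily disabled" from becoming permanent is to register every control exception with a hard expiry, so that reinstatement, not forgetting, is the default. A minimal sketch, with all names invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Sketch: time-boxed control exceptions that lapse by default.
# Registry shape and names are illustrative assumptions.

exceptions = []

def grant_exception(control: str, reason: str, days: int = 7):
    exceptions.append({
        "control": control,
        "reason": reason,
        "expires": datetime.now(timezone.utc) + timedelta(days=days),
    })

def expired_exceptions():
    now = datetime.now(timezone.utc)
    return [e for e in exceptions if e["expires"] <= now]

grant_exception("mfa-for-service-accounts", "AI pilot integration", days=14)
# A scheduled job re-enables any control whose exception has lapsed,
# giving "temporarily disabled" an owner-independent end date.
```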

The second driver is rampant shadow AI. 53% of organizations regularly encounter unsanctioned AI tools and agents accessing company systems. Only 28% can detect shadow AI in real time. What the report documents as a new dimension of this problem is the evolution of shadow AI use cases: employees are no longer just using AI for productivity tasks but are deploying full-fledged autonomous agents under their own credentials, granting those agents broad access to enterprise systems without understanding the access control implications. The rapid adoption of open-source autonomous agent tools in early 2026 is cited as evidence of this shift — what previously required technical expertise can now be deployed by any employee with copy-paste skills.

Shadow AI Scale Signal

Security researchers tracked a publicly exposed open-source autonomous agent tool growing from 1,000 to 21,000 exposed instances in a single week in late January 2026. A Trend Micro report found that 1 in 5 organizations had employees deploy this tool without IT approval during the same period. Shadow AI is no longer an edge case; it is operating at scale inside most enterprise environments right now.
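At its simplest, real-time shadow AI detection is an egress problem: flag traffic to known AI API endpoints from clients that are not on a sanctioned list. The domain list, client IDs, and log shape below are assumptions for the example, not from the report:

```python
# Sketch: flag egress to AI API endpoints from unsanctioned clients.
# The endpoint list and log record shape are illustrative assumptions.

AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com",
                  "generativelanguage.googleapis.com"}
SANCTIONED_CLIENTS = {"approved-chat-gateway", "ml-platform-proxy"}

def flag_shadow_ai(egress_events):
    """Yield events where an unapproved client talks to an AI API."""
    for event in egress_events:
        if (event["dest_host"] in AI_API_DOMAINS
                and event["client_id"] not in SANCTIONED_CLIENTS):
            yield event

events = [
    {"client_id": "approved-chat-gateway", "dest_host": "api.openai.com"},
    {"client_id": "carls-laptop-agent", "dest_host": "api.anthropic.com"},  # shadow AI
]
for hit in flag_shadow_ai(events):
    print(f"shadow AI: {hit['client_id']} -> {hit['dest_host']}")
```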

The third driver is AI fueling unchecked NHI activity. AI agents differ from previous automation in a fundamental way: they are non-deterministic. They make contextual decisions, request elevated privileges dynamically, and take actions that were not explicitly scripted. This creates a governance problem that legacy tools cannot address: 80% of organizations cannot always explain why an NHI took a privileged action. The report connects this directly to the standing access problem — organizations are granting always-on credentials to agents precisely because the friction of alternative approaches is too high, while simultaneously acknowledging that those credentials increase risk they cannot quantify.
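One concrete response to the explainability gap is to make decision context a precondition: a privileged action that arrives without a recorded justification is refused rather than logged after the fact. The sketch below shows one possible shape for that rule; every name in it is hypothetical.

```python
from datetime import datetime, timezone

# Sketch: require a recorded justification before any privileged agent action.
# Addresses the "cannot explain why an NHI took a privileged action" gap;
# the audit-record shape and action names are illustrative assumptions.

audit_log = []

def execute_privileged(agent_id: str, action: str, justification: str | None):
    if not justification:
        raise PermissionError(f"{agent_id}: privileged action '{action}' "
                              "rejected, no decision context supplied")
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "why": justification,   # what investigators need and usually lack
    })
    # ... perform the action here ...

execute_privileged("billing-reconciler-agent", "rotate:db-credentials",
                   "credential age exceeded rotation policy")
```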

GAIG Context

The AI access control gap documented here maps directly to the Identity & Access Controls layer in GAIG's AI security framework — specifically agent permissions, model/API access control, and third-party access. See AI Security Controls Explained for the full control architecture and where each of these failure modes sits in the enforcement stack.

Identity at the Core of AI's Biggest Threats

Section 5 connects the visibility and governance gaps to the current threat landscape. Delinea Labs' analysis identifies a fundamental shift in attack methodology: identity has become the primary execution layer. Breaches increasingly originate from legitimate access — valid credentials, tokens, OAuth grants, and automation pipelines — rather than exploitation of traditional software vulnerabilities. Attackers are using what already exists. They target NHIs specifically because defenders cannot account for what exists in those environments, much less determine what is over-privileged.

The threat landscape findings are reinforced by the survey data on concern levels: 92% of organizations believe AI will amplify identity-related threats over the coming years. The specific threat categories leading concern — credential stuffing and password attacks at 33%, privileged account compromise at 31% — are precisely the categories that over-permissioned, standing-access NHI credentials enable. Ransomware tactics are shifting to target identity infrastructure, exploiting identity providers and SSO systems. PAM exploitation is common. Over-entitled service principals are widespread.

"In 2026, the core security question is no longer 'Can we stop intrusions?' It is 'Can we continuously validate trust across humans, machines, and agents — at machine speed?'"

Delinea Labs Research, The State of Identity, AI, and Cyber Resilience in 2026

Six Recommendations for Reducing Identity Security Friction

Section 6 closes with recommendations drawn from a panel of independent security experts including Kayla Williams (vCISO, SANS Institute 2024 CISO of the Year), Chris Hughes (Resilient Cyber), and Dr. Gerald Auger (Simply Cyber). The six recommendations are sequenced deliberately, with visibility as the prerequisite for everything else.

Visibility comes first

Every other recommendation depends on knowing what identities exist, what access they hold, and what they are doing. Without visibility into NHI and AI agent activity, organizations cannot detect anomalous behavior, cannot investigate suspicious actions, and cannot build policies grounded in operational reality.

Machine-speed security for machine-speed threats

Human-in-the-loop controls cannot keep pace with AI agent activity at scale. The number of agents and the volume of their actions exceed human review capacity. Automated, real-time enforcement — not post-event alerting — is the required operational model.
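The operational difference between alerting and enforcement is where the policy check sits. A minimal sketch, with a hypothetical policy table and deny-by-default evaluation inline, before the action runs:

```python
# Sketch: inline, deny-by-default policy check (enforcement),
# versus logging after the fact (alerting). All names are illustrative.

POLICY = {  # agent -> set of actions it may take autonomously
    "report-builder-agent": {"read:warehouse"},
}

def enforce(agent_id: str, action: str) -> bool:
    """Evaluated inline, before the action executes. Deny by default."""
    return action in POLICY.get(agent_id, set())

def run_action(agent_id: str, action: str):
    if not enforce(agent_id, action):
        # Blocked at machine speed: no human in the loop, no window
        # between the violation and the response.
        raise PermissionError(f"{agent_id} denied '{action}'")
    print(f"{agent_id} executed '{action}'")

run_action("report-builder-agent", "read:warehouse")    # allowed
try:
    run_action("report-builder-agent", "write:payroll") # denied inline
except PermissionError as e:
    print(e)
```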

Zero standing privilege is the endgame

Static, long-lived credentials for AI agents create persistent risk. Just-in-time and ephemeral access are the target state. For most organizations today, standing access will remain the realistic baseline — but visibility into where standing access has been granted is the minimum viable first step.

Zero-trust principles are more critical than ever

Least permissive access control and microsegmentation limit blast radius when an agent behaves unexpectedly or is compromised. The principles matter regardless of whether the term "zero trust" is in favor — they are the architecture that makes any of the other recommendations scalable.

Encourage experimentation in isolated environments

Sandboxed environments with synthetic or public data can channel early-stage agent experimentation away from production systems while governance frameworks are built. This does not satisfy every use case, but it reduces the surface of uncontrolled exposure during the adoption ramp.

Evolve from least privilege to least permissive autonomy

For AI agents, access control must extend beyond what systems an agent can reach to constrain what decisions it can make independently and what actions it can take without human review. Agents should have the autonomy required to complete their task and nothing beyond that scope.
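Least permissive autonomy adds a second authorization axis: not just which systems an agent can reach, but which actions it may take without review. The tiers and the example profile below are illustrative assumptions, not a framework from the report:

```python
from enum import Enum

# Sketch: autonomy tiers layered on top of system access.
# Tier names and the example profile are illustrative assumptions.

class Autonomy(Enum):
    AUTONOMOUS = "execute without review"
    REVIEW = "execute only after human approval"
    FORBIDDEN = "never execute"

AGENT_PROFILE = {
    # system access (classic least privilege)
    "reachable_systems": {"crm", "ticketing"},
    # decision scope (least permissive autonomy)
    "action_tiers": {
        "ticket:comment": Autonomy.AUTONOMOUS,
        "ticket:close": Autonomy.REVIEW,      # agent may propose, human decides
        "crm:delete-record": Autonomy.FORBIDDEN,
    },
}

def autonomy_for(action: str) -> Autonomy:
    # Anything not explicitly granted defaults to the most restrictive tier.
    return AGENT_PROFILE["action_tiers"].get(action, Autonomy.FORBIDDEN)

print(autonomy_for("ticket:comment"))   # Autonomy.AUTONOMOUS
print(autonomy_for("crm:export-all"))   # Autonomy.FORBIDDEN by default
```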

GAIG Context

The governance infrastructure required to operationalize these recommendations — AI inventory and registry, risk classification, agent permissions, audit and evidence generation — is documented in GAIG's governance capabilities framework. See AI Governance Capabilities Explained: What Platforms Actually Do and How to Choose the Right One for the full capability map.

Our Take

AI Security Take

The Delinea report is one of the most useful data points published on AI identity risk in 2026 because it captures something most vendor reports obscure: organizations are not failing to understand the problem. They are failing to act on it. That distinction matters because it changes the diagnosis. If the problem were awareness, the solution would be education. The problem is tooling and governance infrastructure: legacy IAM built for human-centric, static environments is being asked to govern non-deterministic agents that request elevated privileges dynamically and act at machine speed.

The 87/46 split at the center of the confidence paradox will look familiar to anyone tracking the broader AI security market. High confidence paired with admitted capability deficits appears consistently in survey data across governance, security, and monitoring categories. It reflects a genuine organizational dynamic: the pressure to project security readiness to leadership and board audiences is in direct tension with the operational reality of what existing tools can actually observe and enforce. The Delinea data gives that dynamic specific numbers: a 41-point spread between the 87% claiming readiness and the 46% admitting deficiency, and a 52-point gap between discovery confidence and real-time validation. That makes the dynamic easier to use in security budget conversations.

The shadow AI section deserves particular attention. The evolution from employees using productivity AI tools to employees deploying autonomous agents under their own credentials represents a qualitative shift in the threat surface. An employee using ChatGPT for drafting is a data leakage risk. An employee deploying an autonomous agent with broad permissions to their enterprise environment under their own account is an identity and access control risk that looks identical to a compromised credential on the network. The quote in the report captures it cleanly: "I can't tell on the network if it's Carl or Carl's AI." That is not a monitoring problem. It is a fundamental identity architecture problem that most organizations have not started to address.

The path forward the report describes — visibility first, machine-speed enforcement, zero standing privilege as the target state, least permissive autonomy as the agent-specific extension of least privilege — aligns with the Agentic IAM framework published by CoSAI in March 2026, which addresses the same structural failure from the standards side. The convergence of a global survey showing the scale of the problem and a cross-industry standards body defining the required architecture creates a clearer picture of where enterprise AI security investment needs to go in the next 12 months than any single source could provide alone.
