HackerOne 2026: The AI Security Gap Putting Enterprises at Risk

HackerOne’s latest enterprise survey reveals a widening AI security gap: companies are deploying AI systems faster than they can test or monitor them. The result is a measurable increase in attack rates and remediation costs. The report explores how visibility, continuous testing, and governance frameworks will determine which organizations manage that risk successfully.

Updated on March 15, 2026

HackerOne recently published a report titled "Closing the AI Security Gap: Containing Risk Before It Scales." The study surveyed 303 security leaders from enterprises with more than $250 million in annual revenue across six countries. Its central finding is straightforward: 94% of organizations expanded their AI footprint during the past year, yet only 66% formally test the majority of those systems. HackerOne refers to this 28‑point difference as the AI Security Gap, and the report attempts to quantify what that gap means financially for large organizations.

The underlying cause appears to be visibility rather than testing capability. Security teams often cannot see everything that has been deployed. Product and business teams introduce new models, tools, and integrations quickly, while governance coverage expands more slowly. This visibility problem is especially pronounced at the application and agent layer where AI systems connect to APIs, internal data sources, and external tools. Those connections expand the attack surface significantly. The report notes that prompt injection reports alone increased by 540% in 2025, highlighting how quickly this layer is becoming a primary entry point for attackers.

Financial data in the report shows how this structural gap translates into risk. Organizations that fall inside the gap report an 89% AI‑related attack rate and average annual remediation costs of $1.78 million. By contrast, organizations testing more than 91% of their AI systems report a 74% attack rate and average costs of $1.05 million. The difference amounts to roughly $730,000 per year. Importantly, expanded testing coverage does not significantly lower the cost of an individual incident. Instead, it reduces the probability that incidents occur in the first place. That distinction reframes AI security investment as a probability management challenge rather than a pure incident response expense.
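The cost comparison above reduces to simple arithmetic. A minimal sketch using only the figures published in the report (the variable names and the side-by-side framing are ours, not HackerOne's):

```python
# Figures from the report: average annual remediation cost and AI-related
# attack rate for the two coverage tiers it compares.
inside_gap_cost = 1_780_000      # organizations inside the AI Security Gap
high_coverage_cost = 1_050_000   # organizations testing >91% of AI systems

inside_gap_rate = 0.89           # attack rate inside the gap
high_coverage_rate = 0.74        # attack rate with high testing coverage

savings = inside_gap_cost - high_coverage_cost
rate_reduction = round(inside_gap_rate - high_coverage_rate, 2)

print(f"Annual cost difference: ${savings:,}")        # the ~$730K cited
print(f"Attack-rate reduction:  {rate_reduction:.0%}")  # 15 points
```

The point the report stresses is visible in these two deltas: coverage moves the attack rate (probability), while per-incident costs stay comparatively stable.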

From a market perspective, the report reinforces a trend that many analysts have already observed: enterprises are deploying AI systems faster than they can govern them. The security gap is measurable and appears to widen as AI footprints grow. Every additional system introduced without visibility or testing coverage increases the likelihood of attack exposure. Vendors building AI security testing, monitoring, and governance platforms are therefore positioned at a critical point in the emerging enterprise AI stack.

Key Terms

AI Security Gap

The difference between the pace at which organizations deploy AI systems and the rate at which those systems receive formal security testing and monitoring.

AI/ML Footprint

The full collection of AI models, agents, tools, and integrations operating inside an organization’s production environment.

Shadow AI

AI systems or integrations deployed by teams without formal approval or oversight from central security or governance functions.

Testing Coverage

The percentage of an organization’s AI footprint that undergoes structured security testing or evaluation.

Testing Breadth

The number of distinct testing approaches used to evaluate AI systems across different threat categories.

Continuous Threat Exposure Management

A security approach that continuously discovers deployed systems, validates exploitable risk, prioritizes remediation, and retests environments as they change.

AI Red Teaming

Structured adversarial testing designed to simulate attacks against AI systems and identify exploitable weaknesses.

Agent Oversight and Guardrail Tools

Systems designed to monitor autonomous AI agents and enforce operational constraints that prevent misuse or unintended actions.

Key Findings

  • 94% of organizations expanded their AI or ML footprint during the past year, while only 66% formally test more than 60% of their systems, creating the 28‑point AI Security Gap.

  • Organizations inside the gap report an 89% AI‑related attack rate and $1.78 million in annual remediation costs, compared with 74% and $1.05 million for organizations testing 91% or more of their systems.

  • Expanding from two AI systems to roughly eight to ten systems correlates with 82% more attack types and a 2.4× increase in annual financial impact.

  • Each additional AI system deployed correlates with roughly $300,000 in additional expected annual financial impact as enterprise AI footprints expand.

  • Risk concentrates where AI systems connect to external tools, APIs, and data sources, making the application and agent layer the most exposed area of the enterprise environment.

  • 45% of organizations report that they only partially track or informally monitor shadow AI deployments.

  • Application and agent risks carry the highest attack rate at 51%, yet only 39% of organizations test those areas continuously.

  • Prompt injection reports increased 540% in 2025, while continuous testing across this attack surface remains limited.

  • Security testing primarily reduces the likelihood of incidents rather than the cost of each individual breach.

  • 98% of security leaders plan to increase AI testing methods in the coming year, with monitoring tools representing the top priority.

  • 67% of well‑resourced security leaders report using all seven testing methods identified in the report, showing that mature programs layer multiple defenses rather than relying on a single testing approach.

What the Report Covers

The AI Security Gap Defined
The executive summary introduces the 28‑point gap between AI adoption and testing coverage and quantifies the financial exposure that gap creates. The report emphasizes that the problem is not simply the cost of incidents but the probability of incidents occurring as AI footprints grow.

Shadow AI and Untracked Integrations
Visibility emerges as the first systemic failure point. In the past year alone, 65% of organizations added between one and five AI systems. Many of these deployments appear in production environments before security teams fully inventory them. The result is a shadow layer of AI activity operating outside formal governance processes.

Coverage vs. Breadth
The report distinguishes between two dimensions of security maturity. Coverage measures how much of the AI footprint is tested. Breadth measures how many different testing techniques are used. Organizations with mature programs often report more vulnerabilities, not because they are less secure but because they have stronger detection capabilities and better visibility.

The Gap Effect
The financial analysis compares three tiers of organizations based on testing coverage. As AI footprints expand, exposure increases non‑linearly. The report estimates that each additional AI system adds roughly $300,000 in expected annual financial impact, reinforcing how quickly unmanaged AI deployments compound risk.
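The per-system figure implies a simple linear exposure model. A back-of-envelope sketch, assuming the report's ~$300,000 marginal annual impact holds linearly across footprint growth (the function and baseline handling are our illustration, not the report's methodology):

```python
PER_SYSTEM_IMPACT = 300_000  # report's estimated marginal annual impact per AI system

def added_exposure(current_systems: int, planned_systems: int) -> int:
    """Additional expected annual financial impact from expanding the AI
    footprint, under a naive linear model."""
    return max(0, planned_systems - current_systems) * PER_SYSTEM_IMPACT

# Growing from 2 systems to 9 (midpoint of the report's "eight to ten")
print(f"${added_exposure(2, 9):,}")  # $2,100,000 of additional expected impact
```

Even this crude model makes the compounding effect concrete: seven additional systems deployed without testing coverage roughly triples the expected annual exposure of a small footprint.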

What AI Security Testing Includes
Seven testing methods appear most frequently across security programs: AI system monitoring, external AI security testing, agent guardrail tools, model security scanning, AI red teaming, automated adversarial testing, and bug bounty or crowdsourced testing programs. Mature organizations typically combine several of these approaches simultaneously.

Current Testing Is Uneven
The report maps four primary threat categories and their attack rates. Application and agent risks lead at 51%, followed by prompt‑level attacks at 40%, data security risks at 37%, and model integrity risks at 28%. Despite this distribution, testing frequency does not consistently match risk levels. The highest‑risk surfaces are often the least continuously tested.

Closing the Gap
The report concludes with a maturity model built around continuous threat exposure management. The recommended process follows four stages: discover deployed AI systems, validate exploitable vulnerabilities, prioritize the most critical risks, and remediate issues before the next system change introduces new exposure.
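The four stages above form a repeating loop rather than a one-time audit. A minimal sketch of one cycle in Python; the class names, stubbed checks, and data shapes are hypothetical illustrations, not drawn from the report or any specific CTEM product:

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    # Exploitable weaknesses; in practice populated by validation tooling.
    validated_risks: list[str] = field(default_factory=list)

def discover(inventory: list[AISystem]) -> list[AISystem]:
    """Stage 1: enumerate deployed AI systems (stubbed as a static inventory)."""
    return inventory

def validate(system: AISystem) -> list[str]:
    """Stage 2: confirm which weaknesses are actually exploitable (stubbed)."""
    return system.validated_risks

def prioritize(findings: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Stage 3: order findings; here, systems with the most risks come first."""
    ranked = sorted(findings.items(), key=lambda kv: len(kv[1]), reverse=True)
    return [(name, risk) for name, risks in ranked for risk in risks]

def ctem_cycle(inventory: list[AISystem]) -> list[tuple[str, str]]:
    """One discover -> validate -> prioritize pass. Stage 4 (remediation and
    retesting) would consume this queue, then the cycle repeats as the
    environment changes."""
    findings = {s.name: validate(s) for s in discover(inventory)}
    return prioritize(findings)

queue = ctem_cycle([
    AISystem("support-agent", ["prompt injection", "tool misuse"]),
    AISystem("internal-rag", ["data exposure"]),
])
print(queue)
```

The design point is the loop itself: each deployment or integration change re-enters discovery, which is what distinguishes continuous threat exposure management from a point-in-time assessment.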

Our Take

AI Security Take

The 28‑point gap highlighted in the report reflects a structural challenge rather than a simple maturity problem. AI adoption decisions are typically driven by product teams pursuing new capabilities or efficiency gains. Security programs expand more slowly, leaving organizations with AI systems operating in environments that have not yet been fully tested or inventoried.

Continuous threat exposure management offers a practical direction for addressing the problem. This approach aligns with frameworks such as the NIST AI Risk Management Framework and the EU AI Act, both of which emphasize continuous monitoring, risk documentation, and ongoing reassessment of deployed systems. The report’s seven‑method testing model mirrors that regulatory logic: security must operate as a persistent lifecycle process rather than a single compliance checkpoint at deployment.

The report does leave one important question unresolved: vendor accountability. Many enterprise AI systems depend on third‑party platforms, SaaS integrations, or autonomous agents that interact across organizational boundaries. These connections create testing surfaces that individual enterprises cannot fully control.

The accountability gap is already present in many environments today. As AI adoption accelerates, organizations increasingly rely on specialized vendors that provide testing, monitoring, governance, and compliance infrastructure. Those vendors are becoming a foundational layer of the enterprise AI stack.

Explore the vendors building that infrastructure inside the GAIG Marketplace, where organizations can evaluate AI governance, security, compliance, and monitoring platforms designed to close the enterprise AI security gap.
