Governance Research

MIND and CISO ExecNet Research Report: Data Trust Is the Decisive Factor in AI Success

AI adoption has exploded, but trust in the data powering it has not. A major new report reveals 90% of organizations run enterprise GenAI while 70% struggle to enforce policies and just 20% of initiatives hit their targets. Data trust has become the make-or-break factor for AI at scale.

Updated on April 09, 2026

AI has crossed the threshold from pilot projects to core operations. The question is no longer whether organizations will adopt it, but whether the data foundation beneath it can actually be trusted.

New research from MIND in partnership with CISO ExecNet paints a clear picture. While 90% of enterprises have deployed AI at scale, running enterprise tools like Microsoft Copilot, 70% struggle to enforce security policies, and only 1 in 5 initiatives meet their targets, a shortfall the report ties to weak data foundations. Nearly two-thirds of CISOs are not confident, or only somewhat confident, that their controls can prevent unsafe or inappropriate AI data access. These are organizations already operating AI in production.

The report, drawn from a survey of 124 security leaders and 20 in-depth interviews with senior CISOs at large U.S. organizations, reveals consistent patterns. Organizations have rules and policies for AI. They have governance frameworks, acceptable use documents, and AI councils. But data estates carry years of accumulated debt that AI now surfaces immediately, security frameworks were designed for human actors, and business leaders drive adoption at a speed security cannot match.

Data trust, defined as the degree of confidence that systems, including AI, use data safely and appropriately, emerges as the central factor. When data trust is high, organizations use data freely to power outcomes: they experiment more broadly, scale confidently, and recover quickly when needed, and AI becomes an accelerant for competitive advantage. The research shows organizations that build this foundation move fastest and compound their edge over time.

Key Terms

Data Trust — The confidence that systems, including AI, use data safely and appropriately. High trust enables free, confident use.
Data Fundamentals — Classification, governance, quality, and access controls of the underlying data estate. Strong fundamentals create a stable base.
Enforcement Gap — The difference between written AI policies and the technical ability to apply them at machine speed.
Non-Human Actors — AI agents that inherit permissions and operate at machine speed.
Security by Obscurity — The implicit protection once afforded by limited visibility into data, now eliminated by AI's broad access.

Key Findings

The research distills seven tightly connected insights that form a single story:

  • Wide enforcement gap: 70% of organizations struggle to enforce policies on GenAI tools, 66% on AI agents, and 98% face at least one AI security challenge. Governance frameworks deliver clear rules on paper, but technical controls cannot yet apply them at machine speed.

  • Shaky data fundamentals: AI surfaces years of accumulated data debt across unclassified files and repositories. 65% of CISOs lack visibility into what data is accessible for AI input, 68% into what data their agents access, and 41% cannot identify shadow GenAI.

  • AI operates at machine speed: Agents access everything within reach and deliver results continuously. 90% of organizations provide broad data access to enterprise GenAI, and 32% operate with unknown agents already active.

  • AI projects fail when data foundations are not ready: Only 1 in 5 initiatives meet their intended KPIs, a gap the report ties to missing classification, lineage, and governance. Many organizations still measure activity rather than outcomes.

  • CISOs need early involvement: Brought in early, they can support AI adoption while translating exposure into language business leaders use to make confident decisions.

  • AI tests security fundamentals: Organizations with strong classification, identity governance, and enforcement advance projects successfully at scale.

  • High data trust accelerates advantage: Organizations with clean, classified, and governed data move fastest. They remove friction, operate agents within known boundaries, and turn security into a design partner.

These insights create a connected arc: strong enforcement on well-governed data, accessed by agents at machine speed, produces measurable success and widens the edge for prepared organizations.

What the Report Covers

The report combines quantitative data from a three-question survey of 124 CISO ExecNet members with 20 qualitative Zoom interviews of VP-level or higher security leaders at organizations with more than 1,500 employees or over $1 billion in annual revenue. It excludes federal, state, local, and education sectors. All insights represent the strongest convergence between survey statistics and practitioner narratives.

Core sections include methodology and survey findings: 90% enterprise GenAI usage, 74% approved GenAI, 59% custom agentic AI, 65% low or moderate confidence in controls. Top areas of focus: enforcing policies on GenAI tools (70%), understanding agent data access (68%), enforcing on AI agents (66%).

The seven insights are presented as a connected arc. Discussion covers the root conditions, the CISO role as risk translator, the measurement approach that connects activity to outcomes, the widening gap between prepared and unprepared organizations, and minimum viable security requirements: enterprise licensing, vendor data clarity, retention transparency, identity integration, scoped access, and defined business KPIs.

Practical recommendations: begin with visibility through comprehensive data inventory, extend identity governance to non-human actors with task-scoped controls, define success criteria before deployment, build technical enforcement at AI speed, and position security as the function that enables confident scaling.

The report focuses exclusively on real CISO experience, delivering actionable patterns for security leaders and executives.

Our Take

AI Governance Take

This MIND and CISO ExecNet report delivers a clear message: organizations that build strong data trust advance AI programs successfully at scale. With 90% of enterprises running enterprise GenAI but only 20% of initiatives meeting their KPIs, the edge goes to those who treat data trust as core infrastructure.

Governance, monitoring, and security teams gain immediate value by classifying data estates, gaining visibility into agent access, extending identity governance to non-human actors with task-scoped controls, defining outcome-based KPIs upfront, and building technical enforcement that operates at AI speed. Organizations that put these elements in place move faster, experiment more broadly, and compound their advantage.

GAIG tracks platforms in the AI Governance, AI Monitoring, and AI Data Security categories that deliver autonomous discovery, classification, runtime controls, and auditability. These solutions turn data trust from an aspiration into daily operational reality so AI programs scale with confidence and deliver measurable business results.

Related Articles

The State of AI in the Enterprise: A Deloitte Report (Mar 3, 2026)

ValidMind Publishes Governing Agentic AI in Financial Services (Mar 30, 2026)
