A 2025 EY survey found that 99% of companies in its sample reported financial losses due to AI-related risks, with an average loss of $4.4 million per company.
Manos Raptopoulos, Global President of Customer Success Europe, APAC, Middle East & Africa and member of the Extended Board at SAP, put a name to the problem in April 2026. Writing in the SAP News Center, he argued that the distance between near-perfect and perfect AI performance is not incremental for enterprise operations; it is existential. His framing was precise:
“The gap between 90% and 100% is precisely where enterprise value lives. It is also where leadership is tested. The decisions you make in the coming months will determine whether AI becomes your most powerful source of durable advantage or your most expensive lesson in misplaced confidence.”
The 10% gap between 90% and 100% accuracy in an enterprise context is not a technical imprecision. It is the zone where cash flow recommendations go wrong, where supply chain orders execute against corrupted data, where customer commitments are made on bad data, where compliance positions get misrepresented to regulators. In agentic AI systems that act autonomously across production workflows, that gap does not produce a politely wrong output. It produces an executed decision at scale with real operational consequences before any human reviews it.
This piece builds on Raptopoulos’s argument with the specific operational and financial mechanics that explain why poor governance destroys margins — and what the enterprises protecting their margins are actually doing differently.
Not sure where your AI governance program has margin-exposure gaps? Submit an inquiry and GAIG will match you with the right platforms.
$4.4M average financial loss per company from AI-related risks in 2025
EY 2025 survey
73% of enterprise AI deployments fail to achieve projected ROI in 2026
McKinsey Global AI Survey 2026
$7.2M average sunk cost per abandoned AI initiative; large enterprises abandoned an average of 2.3 initiatives in 2025
S&P Global Market Intelligence 2025
Key Terms
Deterministic Control
An AI system behavior that produces a predictable, consistent output given the same inputs — enforced through policy, guardrails, and data governance rather than left to the model’s probabilistic nature. Raptopoulos frames embedding deterministic control into probabilistic intelligence as the defining C-suite mandate of 2026.
Agent Sprawl
The uncontrolled proliferation of autonomous AI agents across an enterprise — deployed by different teams, under different service accounts, with overlapping and ungoverned capabilities. Raptopoulos warns this will mirror the shadow IT crises of the past decade, but with categorically higher stakes because agents act rather than suggest.
Data Foundation Moment
Raptopoulos’s term for the point at which an organization must confront whether its AI systems are operating on data that is clean, governed, and integrated enough to be trusted. Fragmented master data, siloed systems, and over-customized ERP environments are what make the 10% accuracy gap operationally dangerous.
Agent Lifecycle Management
The governance practice of managing autonomous agents across their full operational life — from deployment authorization through scope definition, continuous performance monitoring, and decommissioning. Without it, agents accumulate permissions, drift from their original purpose, and generate liability without a named human accountable for their behavior.
Margin Erosion
The compounding destruction of enterprise profit margins through failed AI projects, abandoned pilots, compliance remediation costs, agent-originated operational errors, and the opportunity cost of delayed deployment cycles. Distinguished from discrete losses — margin erosion is structural and accelerates as ungoverned agent deployments scale.
Governance Compounding
The accumulating competitive advantage that accrues to enterprises with strong AI governance — faster deployment cycles, lower failure rates, cheaper remediation, cleaner audit evidence, and higher AI ROI — relative to competitors absorbing the costs of ungoverned deployment. The gap widens every quarter.
How Weak Governance Destroys Profit Margins
The margin destruction is not happening in a single dramatic incident. It compounds across five channels simultaneously, and most finance teams are not attributing the costs to governance failures because the connection is not obvious until you map it out.
The most visible channel is abandoned AI projects. According to S&P Global Market Intelligence, 42% of companies abandoned at least one AI initiative in 2025, with the average sunk cost per abandoned initiative reaching $7.2 million. Large enterprises with more than 10,000 employees abandoned an average of 2.3 initiatives. That is roughly $16.5 million in direct sunk costs per large enterprise in one year, before accounting for the engineering time, the vendor contract costs, or the organizational momentum lost when teams spend months on projects that never reach production. The underlying cause in most cases: governance and data foundation gaps that surface at the point of scaling and halt deployment.
The second channel is the ROI gap on projects that do reach production but fail to deliver. McKinsey’s 2026 AI survey found that 73% of enterprise AI deployments fail to achieve projected ROI. A 2025 MIT Sloan study found that 61% of enterprise AI projects were approved on the basis of projected value that was never formally measured after deployment — governance and measurement infrastructure that doesn’t exist can’t prove value that exists, and value that can’t be proven doesn’t get scaled. The financial consequence is the same as abandonment, just spread across more time.
To make this concrete, consider the scenario Raptopoulos describes directly. A company running autonomous agents across supply chain and finance. The agents are operating on fragmented master data — supplier records that haven’t been cleaned, invoice matching rules that reflect a legacy ERP configuration, demand forecasting inputs that are siloed from real-time inventory. The agent produces a cash flow recommendation. It looks authoritative. Nobody has runtime visibility into what data the agent used to produce it or whether that data was reliable. The recommendation executes.
Composite Enterprise Scenario
A mid-sized manufacturing enterprise deploys an autonomous procurement agent in Q3 2025. The agent is authorized to approve supplier orders up to $500,000 without human review, operating under a service account with broad ERP access. Six weeks into production, the agent begins routing orders based on a supplier scoring model that was trained on pre-pandemic supplier performance data — nobody updated the training data when three key suppliers changed their reliability profiles after 2022 supply chain disruptions.
The orders execute correctly from a technical standpoint. The governance gap is invisible in the monitoring dashboard because the agent is operating within its authorized scope. What the dashboard doesn’t surface: the agent has approved $2.3M in orders with a supplier whose current on-time delivery rate has dropped to 61% — a fact that exists in a separate operational system that the agent was never connected to.
The operational damage — delayed production runs, expediting costs, customer delivery failures — surfaces eight weeks later. The post-incident review can reconstruct exactly what happened. It cannot answer who was accountable for ensuring the agent’s data inputs were current, because that accountability was never assigned. The remediation costs exceed the efficiency gains the agent generated in its first six months of operation.
"if an autonomous agent relies on fragmented foundations to provide a recommendation affecting cash flow, customer relations, or compliance positions, the resulting operational damage scales INSTANTLY.”
Raptopoulos
The word “instantly” is doing critical work there. Agents don’t wait for a human to review a recommendation. They execute. At the speed and scale that makes them valuable, they also make ungoverned decisions at the speed and scale that makes the damage expensive.
The third channel is compliance remediation. Regulatory frameworks now specifically require documentation of AI decision processes, from the EU AI Act's post-market monitoring requirements (Article 72) to NIST AI RMF's evidence generation standards. Organizations without continuous audit trail generation are not just at risk of fines. They face remediation costs to reconstruct evidence retroactively when regulators ask for it, and remediation is always more expensive than governance built into the original deployment. The fourth channel is insurance: AI-specific liability coverage is now a standard enterprise procurement consideration, and underwriters are actively pricing governance maturity into premiums. The fifth channel, the one that compounds fastest, is competitive disadvantage.
The Case Raptopoulos Is Making
Raptopoulos’s argument, made in both the AI News interview and his SAP News Center piece, builds on a specific observation about the moment enterprises are in right now. The question has shifted. AI is no longer being evaluated on novelty or capability. It is being evaluated on precision, governance, scalability, and tangible business impact. That is a fundamentally different procurement and deployment standard than the one that governed the first wave of enterprise AI adoption, and most governance programs were built for the old standard.
His core claim is that failing to govern agentic AI systems the way you govern a human workforce exposes the organization to severe operational risk. The analogy is deliberate and specific. Human employees have defined job descriptions, clear accountability structures, documented escalation paths, and performance reviews. They operate within established policies that constrain their authority. When something goes wrong, there is an organizational structure for determining who was responsible and what the response is. Agent deployments at most enterprises have none of those structural elements, and they’re executing decisions that affect cash flow, customer relationships, and compliance positions at a speed and scale that no human workforce could match.
“Governance in the age of AI is less about controlling risk at the edge and more about embedding deterministic control into probabilistic intelligence. That is a C-suite mandate, not an IT project.”
The three baseline issues Raptopoulos argues boards must resolve before deploying agentic models are worth stating precisely, because they map directly to the accountability gaps that produce the financial losses above. First: identifying who holds accountability for an agent’s error. Second: establishing audit trails for machine decisions. Third: defining the exact thresholds for human escalation. These are organizational design requirements, not technology requirements. The platform can surface the signal. Only the organizational design determines whether anyone is accountable for acting on it.
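To make those three requirements concrete, here is a minimal sketch of what they could look like as a deployment-time registration record. The schema, field names, and the reliability floor are illustrative assumptions, not a prescribed standard; the $500,000 autonomous approval limit echoes the composite scenario above.

```python
# Hypothetical agent registration record illustrating the three baseline
# requirements. Field names and values are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AgentGovernanceRecord:
    agent_id: str
    accountable_owner: str      # 1. A named human accountable for the agent's errors
    audit_log_destination: str  # 2. Where every machine decision is recorded
    escalation_thresholds: dict = field(default_factory=dict)  # 3. Exact escalation limits

procurement_agent = AgentGovernanceRecord(
    agent_id="proc-agent-01",
    accountable_owner="vp.supply.chain@example.com",
    audit_log_destination="s3://governance-evidence/proc-agent-01/",
    escalation_thresholds={
        "max_autonomous_order_usd": 500_000,  # above this, a human must approve
        "min_supplier_on_time_rate": 0.90,    # below this, route to human review
    },
)
```

None of these fields require exotic technology. What they require is an organizational decision about who owns each value, which is exactly Raptopoulos's point.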
Raptopoulos also flags a dimension that most governance frameworks underweight: geopolitical fragmentation. Sovereign cloud requirements, data localization mandates, and AI-specific regulations are now regulatory realities across every major market simultaneously, from New York and Frankfurt to Riyadh and Singapore. Enterprises deploying agents globally are not navigating one governance framework. They are navigating multiple overlapping frameworks with different requirements, different enforcement mechanisms, and different audit evidence standards. That complexity is a governance problem that scales with agent deployment, and it compounds the margin destruction from the channels above when compliance positions get misrepresented across jurisdictions.
Agentic AI Makes the Margin Problem Categorically Worse
The governance-to-margin connection existed before agentic AI. Bad models producing bad recommendations cost enterprises money. Ungoverned deployments created compliance exposure. Poor data quality degraded AI ROI. Those are real costs and they’ve been accumulating for years.
Agentic AI changes the cost structure in a specific way: it removes the human review step between the AI’s decision and the executed outcome. In the pre-agentic model, an AI system produces a recommendation. A human reviews it. The human makes the decision. The governance risk is that the human acts on a bad recommendation without adequate context or that the recommendation reflects a model with ungoverned drift. That risk is real but bounded — the human is still the actor, and human decisions have existing accountability structures built around them.
An autonomous agent produces a decision and executes it. The human’s role is no longer reviewer before the action — it’s auditor after the fact. At the speed and volume that makes agentic AI economically compelling — hundreds or thousands of decisions per hour across production workflows — post-hoc audit is not a governance mechanism. It’s an incident report.
The non-deterministic behavior problem compounds this. Traditional software executes deterministically — the same inputs produce the same outputs. AI models, and agentic systems in particular, produce outputs that are probabilistic. Two identical inputs can produce different outputs. An agent that handles a procurement decision correctly 9,000 times can mishandle it on the 9,001st under slightly different context conditions. The 10% accuracy gap Raptopoulos describes isn’t a fixed miss rate — it’s a tail risk that materializes unpredictably across large volumes of decisions. At enterprise scale, that tail risk is continuous margin exposure.
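A back-of-envelope calculation shows why that tail risk scales with volume. The sketch below assumes a constant, independent per-decision failure probability, which is a simplification (real agent failures cluster around context shifts), but the direction holds: at enterprise decision volumes, rare becomes routine.

```python
# Tail-risk illustration under a simplifying assumption of constant,
# independent per-decision failure probability.
def p_at_least_one_failure(per_decision_rate: float, decisions: int) -> float:
    # Probability that at least one of N decisions fails.
    return 1 - (1 - per_decision_rate) ** decisions

for rate in (0.0001, 0.001):              # 0.01% and 0.1% per-decision failure rates
    for volume in (1_000, 10_000, 100_000):
        p = p_at_least_one_failure(rate, volume)
        print(f"rate={rate:.4%}  decisions={volume:>7,}  P(>=1 failure)={p:.1%}")
```

At a 0.01% per-decision failure rate, 10,000 executed decisions carry roughly a 63% chance of at least one failure. The per-decision accuracy number that looked impressive in the pilot is not the number that determines margin exposure in production.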
The Competitive Compounding Problem
The FOMO argument that drives enterprise AI spending cuts both ways; Goldman Sachs research found that fear of missing out has proven a stronger incentive for hyperscaler AI investment than poor stock performance. The enterprises that govern well are deploying faster because they fail cheaper, recover quicker, and build audit-ready evidence as a byproduct of operational deployment rather than as a retroactive remediation project. The governance gap between a well-governed competitor and a poorly governed one compounds quarterly. Every abandoned initiative, every compliance remediation project, every delayed deployment cycle is a quarter where the gap widens. That is the financial case for governance that the $4.4M average loss per company understates: it captures the direct cost but not the compounding competitive disadvantage.
Raptopoulos identifies agent sprawl as the organizational failure mode that makes this worse: the uncontrolled proliferation of autonomous agents deployed by different teams, under different credentials, with ungoverned capability overlap. The shadow IT analogy he uses is precise. Shadow IT created technical debt, security exposure, and compliance gaps that enterprises spent years remediating. Agent sprawl replicates that pattern with systems that take autonomous actions in production environments, often under service accounts with broader permissions than any individual human employee would be granted. The remediation cost, when it arrives, is proportionally larger.
What Margin-Protecting AI Governance Actually Looks Like
The enterprises closing the 73% ROI failure gap share specific operational characteristics. These are not philosophical commitments to governance as a value — they are technical and organizational infrastructure choices that produce measurably different deployment outcomes.
Runtime Observability Connected to Production Systems
The governance-to-margin connection requires observability that operates at inference time — capturing what agents actually do in production, not what they were configured to do in staging. This means behavioral signal capture across agent decision chains, context quality monitoring for the data inputs agents are operating on, and drift detection that fires before an agent’s behavior has diverged far enough to produce a financial consequence.
Raptopoulos’s data foundation moment is the operational reason this matters. Fragmented master data doesn’t announce itself as a governance problem — it looks like normal enterprise data complexity until an agent relies on it for a decision that affects cash flow. Runtime observability is what surfaces the data quality signal before the decision executes, not after the damage is done. The monitoring signal framework for this covers context quality, data freshness, and input reliability — the signals that determine whether an agent’s decision is grounded in trustworthy inputs.
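Here is a minimal sketch of what a data-freshness signal could look like at inference time. The source names, freshness limits, and interfaces are illustrative assumptions, not any vendor's API:

```python
# Minimal sketch of a data-freshness check run before an agent decision
# executes. All names and thresholds are illustrative assumptions.
from datetime import datetime, timedelta, timezone

FRESHNESS_LIMITS = {
    "supplier_master": timedelta(days=30),
    "inventory_snapshot": timedelta(hours=1),
    "demand_forecast": timedelta(days=7),
}

def stale_inputs(context: dict[str, datetime]) -> list[str]:
    """Return the agent inputs whose last refresh exceeds their freshness limit."""
    now = datetime.now(timezone.utc)
    return [
        source for source, last_refresh in context.items()
        if now - last_refresh > FRESHNESS_LIMITS.get(source, timedelta(days=1))
    ]

# The composite scenario's years-old supplier scoring data would fire here:
print(stale_inputs({"supplier_master": datetime(2022, 1, 1, tzinfo=timezone.utc)}))
```

The check itself is trivial. What makes it a governance capability is wiring it into the agent's decision path so that the signal fires before execution, not in a post-incident review.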
Platforms with runtime observability for agentic AI
Arize AI, Fiddler AI, Arthur AI, Superwise, ModelOp
Behavioral Guardrails That Enforce Policy at Inference Time
Deterministic control embedded into probabilistic intelligence — Raptopoulos’s framing — is operationally a guardrail problem. Policy requirements that live in documentation don’t constrain agent behavior. Controls that execute at inference time do. The technical requirement is guardrails that restrict agent action scope, enforce authorization boundaries, and prevent the execution of decisions that fall outside defined policy parameters — before those decisions produce financial consequences, not after.
This is where the governance platform capability that separates Level 3 platforms from their predecessors is most visible. A platform that connects policy to production and enforces it at inference time can catch the agent operating on stale supplier data before it approves the $2.3M in orders. A platform that documents the policy and checks compliance after the fact catches the gap in the post-incident review, which is too late.
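A minimal sketch of that kind of inference-time guardrail, using the composite scenario's numbers: the $500,000 limit comes from the scenario, while the 90% reliability floor and all interfaces are illustrative assumptions.

```python
# Sketch of an inference-time guardrail for the composite scenario above.
# The approval limit mirrors the scenario; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class ProposedOrder:
    supplier_id: str
    amount_usd: float
    supplier_on_time_rate: float  # pulled live from the operational system
                                  # the scenario's agent was never connected to

def guardrail(order: ProposedOrder) -> str:
    if order.amount_usd > 500_000:
        return "escalate: above autonomous approval limit"
    if order.supplier_on_time_rate < 0.90:
        return "block: supplier reliability below policy floor"
    return "allow"

# The scenario's orders to a 61% on-time supplier would have been stopped at
# the second check instead of surfacing eight weeks later.
print(guardrail(ProposedOrder("sup-117", 180_000, 0.61)))  # block: supplier reliability...
```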
Platforms with runtime policy enforcement and behavioral guardrails
Credo AI, Monitaur, Holistic AI, Trustible, ValidMind, ServiceNow
Continuous Validation and Agent Lifecycle Management
Raptopoulos’s agent lifecycle management requirement addresses the permission creep problem: the gradual expansion of agent capabilities and access permissions over time, without corresponding reviews of whether that expanded scope is still appropriate. Agents don’t stay within their original deployment parameters indefinitely. They accumulate OAuth grants, their data access expands as integrations are added, and their operational context changes as the enterprise environment around them shifts.
Continuous validation means the governance program doesn’t end at deployment authorization. It runs ongoing checks against the agent’s current behavior, current permission scope, and current data inputs, comparing them against the original governance documentation and flagging deviations before they produce operational consequences. This is what separates real enforcement from compliance theater: ongoing technical checks against live behavior rather than a one-time documentation exercise.
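A minimal sketch of what such a scope-drift check could look like, assuming illustrative grant names and a recorded authorization baseline:

```python
# Sketch of a continuous permission-scope validation check: compare the
# agent's live grants against its originally authorized scope and flag
# anything that crept in. All names are illustrative assumptions.
AUTHORIZED_SCOPE = {"erp:purchase_orders:write", "erp:suppliers:read"}

def scope_drift(current_grants: set[str]) -> set[str]:
    """Grants the agent holds today that were never in its authorization."""
    return current_grants - AUTHORIZED_SCOPE

current = {"erp:purchase_orders:write", "erp:suppliers:read",
           "erp:payments:write"}  # accumulated via a later integration
if drifted := scope_drift(current):
    print(f"permission creep detected, route to owner for review: {drifted}")
```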
Platforms with agent lifecycle management and continuous validation
ModelOp, Credo AI, Relyance AI, Saidot, Enzai, Holistic AI
Audit Trail Generation That Meets Regulatory Standards
Raptopoulos’s second baseline requirement — establishing audit trails for machine decisions — is a compliance cost management issue as much as a governance one. Organizations that generate continuous, structured audit evidence from agent decision chains as a byproduct of production operations face a fundamentally different regulatory examination cost than organizations that reconstruct evidence retroactively when a regulator asks for it.
The regulatory frameworks now requiring AI audit documentation — EU AI Act Article 73, NIST AI RMF, ISO 42001 — were not written to be satisfied retroactively. They assume ongoing evidence generation from live systems. Enterprises in regulated industries — financial services under FFIEC, healthcare under HIPAA, EU-based operations under GDPR — face the compound compliance exposure of AI-specific requirements layered on top of existing frameworks, all demanding evidence that ungoverned deployments cannot produce without expensive reconstruction projects.
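A minimal sketch of what byproduct evidence generation could look like per agent decision follows. The record schema is an illustrative assumption, not language drawn from any of these frameworks:

```python
# Sketch of a structured audit record emitted for each agent decision, so
# evidence accumulates as a byproduct of operation rather than being
# reconstructed later. The schema is an illustrative assumption.
import json
from datetime import datetime, timezone

def audit_record(agent_id, decision, inputs_used, policy_checks, outcome):
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "decision": decision,            # what the agent decided
        "inputs_used": inputs_used,      # data sources and their versions
        "policy_checks": policy_checks,  # which guardrails ran, and their results
        "outcome": outcome,              # executed, blocked, or escalated
        "accountable_owner": "vp.supply.chain@example.com",  # illustrative
    })

print(audit_record("proc-agent-01", "approve_order:sup-117",
                   {"supplier_master": "v2025-09-30"},
                   {"amount_limit": "pass", "reliability_floor": "pass"},
                   "executed"))
```

The structural point is the last field: a record without a named accountable owner documents what happened without answering the question regulators, and post-incident reviews, actually ask.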
Platforms with structured audit trail generation for AI decisions
Vanta, Norm AI, Monitaur, ValidMind, Adeptiv AI, VerifyWise, Trustible
What This Framework Doesn’t Resolve
The governance-to-margin argument is real and well-supported by the data above. Two honest limitations on the framing are worth stating directly.
First, governance is necessary but not sufficient for AI ROI. The 73% ROI failure rate reflects a range of failure causes — poor use case selection, inadequate data foundations, change management failures, and governance gaps. Governance platforms address the governance dimension. They don’t fix use cases that were wrong from the start or data foundations that were never cleaned. Raptopoulos is precise about this: the data foundation moment is a prerequisite to governance, not a downstream consequence of it. Organizations that haven’t addressed their data fragmentation problem will find that adding governance infrastructure surfaces the data problems more visibly but doesn’t resolve them.
Second, the competitive compounding argument assumes that governance investment translates directly into deployment speed advantage. That is true for organizations that have already resolved their data foundation issues and are operating at meaningful AI deployment scale. For organizations still in early pilot phases with narrow, low-stakes deployments, the governance investment required to close all the gaps above is not proportionate to the current risk. Build the governance infrastructure before you scale — but the urgency scales with deployment complexity, not with the aspiration to deploy.
Our Take
Raptopoulos is right that this is a C-suite mandate, not an IT project. But the reframe that makes it actionable at the board level is the one the financial data supports: governance is the difference between AI that compounds competitive advantage and AI that compounds cost. The $4.4M average annual loss per company, the $7.2M per abandoned initiative, the 73% ROI failure rate — these are not compliance costs. They are operational costs that sit on the P&L whether or not anyone has labeled them as governance failures.
The enterprises that are pulling ahead in the agentic era are not the ones with the most sophisticated models or the largest AI infrastructure budgets. They are the ones that govern with enough precision to deploy confidently, fail cheaply when something goes wrong, and generate the audit evidence that makes regulatory examination an operational cost rather than an emergency. Every quarter that gap compounds. The organizations building the governance infrastructure now — runtime observability, behavioral guardrails, agent lifecycle management, continuous audit trail generation — are the ones whose AI programs will look fundamentally different from their competitors’ programs in 2028.
That is what Raptopoulos means by “your most expensive lesson in misplaced confidence.” The confidence is in the technology. The misplacement is in believing the technology governs itself.
Evaluate AI governance platforms built for agentic deployment in the 2026 governance platform guide — or use the complete vendor interview guide to run structured procurement evaluation. If you’re not sure where your current governance program has gaps, submit an inquiry and GAIG will match you with platforms evaluated specifically for your deployment environment and regulatory obligations.