SR 11-7 Is Gone.
On April 17, 2026, U.S. federal regulators released SR 26-02, updated supervisory guidance on model risk management for financial institutions. It replaces SR 11-7, the guidance that has governed model validation, documentation, and oversight at banks since 2011. Fifteen years later, the industry it was written for no longer exists. Machine learning, generative AI, and agentic systems have fundamentally changed what a "model" is, how it behaves in production, and what meaningful oversight actually requires.
ValidMind, an AI model risk management platform built specifically for financial services, published this position paper within days of the guidance release. Written by Jan Larsen and Kevin Allen, the paper lays out what SR 26-02 changes in operational terms, what it leaves unresolved, and why the speed of governance modernization will now directly determine competitive position in the AI era.
The paper's core argument is direct: SR 26-02 is not primarily a compliance update. It is a competitive inflection point. The guidance introduces a formal materiality construct, narrows the practical scope of model risk management, and places greater weight on ongoing monitoring and outcomes analysis than on prescriptive validation cycles. That creates a window for banks to redesign governance around actual risk rather than process burden — and institutions that move through that window first will deploy AI faster, at lower cost, and with more confidence than peers who preserve legacy governance structures.
Importantly, SR 26-02 explicitly defers guidance on generative AI and agentic AI to a future date. The Federal Reserve is still soliciting input from institutions on these topics. For banks already deploying these systems, the constraint is no longer regulatory permission — it is the organization's own ability to govern, monitor, and control AI with confidence.
Key Findings
SR 26-02 introduces a formal materiality construct that gives banks significantly more flexibility to tier governance by model risk and streamline oversight for lower-risk models — a structural change from the uniform rigor required under SR 11-7.
The shift creates a competitive race: institutions that modernize governance quickly can deploy models faster, reduce operating cost, and redeploy scarce expert resources toward high-impact decisions. Institutions that delay preserve legacy bottlenecks while peers accelerate.
Governance can become materially cheaper under SR 26-02. Banks have more freedom to automate low-materiality oversight, reduce low-value review work, and concentrate expert validation on the models and decisions that matter most.
SR 26-02 explicitly defers guidance on generative AI and agentic AI to a future date. The Federal Reserve is actively soliciting input from financial institutions on these topics, meaning banks are currently operating in a formal regulatory gap for their fastest-growing AI categories.
For teams deploying agentic AI, the constraint is no longer regulatory permission. It is the institution's own ability to govern, monitor, and control these systems with confidence — and the paper identifies this as the central operational challenge of 2026.
Agentic systems require system-level risk assessment, not component-level evaluation. They combine multiple models, tools, workflows, and decision points, and materiality can change rapidly as these systems take on broader decision authority or touch more sensitive workflows.
SR 26-02 places significantly greater weight on ongoing monitoring and outcomes analysis as practical governance tools, particularly for lower-materiality models, frequently updated models, and vendor models — shifting governance from validation-at-a-point-in-time to continuous oversight.
Third-party and vendor models remain a major risk category under SR 26-02. Institutions are still expected to understand how external AI services work, validate their outputs, and monitor performance continuously, regardless of who built the model.
The concept of effective challenge changes in practice under SR 26-02: the test is no longer whether challenge is extensively documented, but whether it leads to better outcomes and faster correction when models drift or assumptions break down.
Aggregate AI risk becomes a new enterprise priority. As AI systems increasingly share data, assumptions, and infrastructure, failures can propagate across systems — requiring portfolio-level governance visibility rather than point solutions for individual models.
SR 26-02 is not prescriptive about non-compliance in the way SR 11-7 was. The guidance states explicitly that non-compliance will not itself result in supervisory criticism, though supervisory action may still follow from violations of law or unsafe practices arising from insufficient model risk management.
The paper identifies three immediate operational priorities: remove structural delay by aligning governance effort with impact; build governance infrastructure for AI scale through stronger monitoring and inventory; and reallocate expert capacity toward the highest-risk, highest-value decisions.
What the Report Covers
Executive Summary: Four Moves That Define the Race
The paper opens by framing SR 26-02 as a strategic opportunity rather than a compliance obligation. ValidMind identifies four moves that determine which institutions win the race created by the guidance: distinguishing between models and non-models; tiering governance by model materiality; automating oversight for low-materiality models; and concentrating expert review on the highest-impact decisions. The paper states plainly that banks completing these moves quickly will deploy models faster, scale AI with more confidence, and redeploy scarce expert resources to areas that matter most. Banks that delay will remain burdened by slower decision cycles, higher structural cost, and weaker competitive agility.
Key Takeaway
"SR 26-02 is not just a regulatory update. It is a catalyst for operational and competitive transformation."
What SR 26-02 Changes and Why It Matters Now
The paper offers a clear description of what SR 11-7 looked like in practice: detailed validation cycles, broad model scope, and examiner expectations that encouraged uniform rigor across very different use cases. The implicit assumption was that all models deserved similar oversight intensity regardless of their actual risk to the institution. SR 26-02 introduces a formal materiality construct that breaks this assumption. By allowing governance effort to be calibrated to actual risk rather than prescribed process, the guidance changes the economics of model deployment in three specific ways.
First, governance can become cheaper. Banks have more freedom to reduce low-value review work, automate low-materiality oversight, and direct scarce expert resources toward the models and decisions that matter most. Second, deployment can become faster. Aligning governance effort with model materiality allows institutions to shorten approval cycles and reduce the friction that has historically slowed model and AI rollout. Third, the gap between strong and weak operators can widen quickly. Institutions that act now improve speed, efficiency, and organizational responsiveness while slower peers preserve legacy bottlenecks. The flexibility created by SR 26-02 comes with a new burden of proof: institutions need to show their approach is effective, defensible, and aligned to actual risk — and that is difficult to do with fragmented inventories, manual workflows, and limited monitoring infrastructure.
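To make the tiering idea concrete, here is a minimal sketch of what materiality-calibrated governance assignment could look like in practice. The scoring formula, dollar thresholds, and tier labels below are invented for illustration — SR 26-02 introduces the materiality construct but does not prescribe any such formula.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    financial_exposure: float   # dollars influenced by the model's decisions
    decision_autonomy: float    # 0.0 (advisory only) to 1.0 (fully automated)

def materiality_score(m: Model) -> float:
    """Toy composite score: exposure scaled up as the model becomes more autonomous."""
    return m.financial_exposure * (0.5 + 0.5 * m.decision_autonomy)

def governance_tier(m: Model) -> str:
    """Map a materiality score to a governance tier (thresholds are illustrative)."""
    score = materiality_score(m)
    if score >= 1e8:
        return "full validation + independent effective challenge"
    if score >= 1e6:
        return "targeted validation + automated monitoring"
    return "automated monitoring only"

portfolio = [
    Model("credit_pricing", financial_exposure=5e8, decision_autonomy=0.9),
    Model("doc_classifier", financial_exposure=2e5, decision_autonomy=0.3),
]
for m in portfolio:
    print(f"{m.name}: {governance_tier(m)}")
```

The point of the sketch is the structural change, not the numbers: under uniform SR 11-7-style rigor both models above would receive similar oversight, whereas a materiality construct routes expert review to the high-exposure model and leaves the low-risk classifier on automated monitoring.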
For Risk Executives
An opportunity to reduce the cost of control and stop spending premium talent on low-value governance activity. Materiality allows a shift from uniform validation to targeted, risk-aligned rigor.
For AI Governance Leaders
A signal that governance maturity is now a speed advantage. Banks that operationalize documentation, monitoring, testing, and escalation move AI from experiment to production faster.
For Agentic AI Teams
The constraint is no longer regulatory permission. It is the institution's ability to govern these systems with confidence. SR 26-02 defers formal agentic AI guidance — building capability now wins.
For Bank Executives
A competitive inflection point. The question is not whether to respond — it is how quickly the operating model can be adapted to turn SR 26-02 into a deployment and cost advantage.
The Executive Mandate: Act Now or Fall Behind
The paper makes the competitive argument explicitly. SR 26-02 reduces constraints across traditional model risk management while simultaneously increasing competitive pressure across the industry. The combination matters because governance is no longer just a control function — it is becoming a determinant of how quickly a bank can deploy models, scale AI, and respond to market opportunities. Banks that move first will benefit from faster time to market, improved pricing responsiveness, and lower operational cost. They will also be able to redeploy scarce expert resources away from low-value review work and toward high-impact business decisions.
Banks that lag will face slower innovation cycles, greater internal friction, and worsening efficiency relative to peers that modernize sooner. Over time, this becomes visible not just in process metrics but in competitive performance. The same logic applies to AI specifically: the temporary absence of formal guidance for generative and agentic AI creates a parallel race, where institutions that can confidently govern and deploy these systems gain a meaningful advantage and institutions that cannot will either delay adoption or accept higher risk.
Key Takeaway
"SR 26-02 gives banks a chance to remove structural cost and delay from model and AI deployment, and rewards the institutions that move first."
From Constraint to Efficiency
The paper dedicates a full section to the operational opportunity for risk executives specifically. SR 26-02 creates a direct opportunity to push more models into production faster, focus attention on the highest-risk decisions, support the model risk management (MRM) function with fewer resources, and reduce unnecessary validation effort. The paper notes that the larger opportunity is economic: to redesign the operating model of MRM around where expert intervention actually adds value rather than where regulatory process has historically required it.
SR 26-02 does not diminish the importance of validation. It does, however, place greater relative weight on ongoing monitoring and outcomes analysis as practical tools for managing lower-materiality models, frequently updated models, and vendor models. In those cases, the objective is maintaining confidence in performance and rapidly detecting when conditions, inputs, or usage have changed enough to warrant intervention — not replicating heavy standardized validation testing. The concept of effective challenge also changes: the test is no longer whether challenge is extensively documented but whether it leads to better outcomes and faster correction when models drift or assumptions break down.
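As one concrete illustration of what this kind of ongoing monitoring can look like, the sketch below compares a model's production score distribution against its validation-time baseline using the population stability index (PSI), a drift statistic widely used in credit and risk modeling. The bin counts and the 0.10/0.25 alert thresholds are common industry conventions, not anything SR 26-02 prescribes.

```python
import math

def psi(baseline: list[int], live: list[int]) -> float:
    """Population Stability Index over pre-binned counts.
    0 means identical distributions; larger values mean more drift."""
    b_total, l_total = sum(baseline), sum(live)
    value = 0.0
    for b, l in zip(baseline, live):
        # Small floor avoids log(0) when a bin is empty in one sample.
        p = max(b / b_total, 1e-6)
        q = max(l / l_total, 1e-6)
        value += (q - p) * math.log(q / p)
    return value

def drift_status(score: float) -> str:
    """Illustrative escalation thresholds (a common convention, not regulatory)."""
    if score < 0.10:
        return "stable"
    if score < 0.25:
        return "investigate"
    return "escalate: material shift, intervention warranted"

baseline_bins = [120, 300, 400, 150, 30]   # validation-time score distribution
live_bins     = [60, 180, 350, 280, 130]   # current production distribution
print(drift_status(psi(baseline_bins, live_bins)))
```

A check like this is cheap to run continuously, which is exactly the trade SR 26-02 rewards for lower-materiality and frequently updated models: rapid detection that conditions have changed, triggering intervention, instead of periodic heavyweight revalidation.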
Key Takeaway
"The strategic issue is not whether banks understand SR 26-02. It is whether they can modernize governance fast enough to capture the speed, efficiency, and AI advantage it creates."
Enabling Faster, Safer Adoption
The paper frames SR 26-02 as a strong signal for AI governance leaders: the next competitive advantage will belong to institutions that can govern AI without turning governance into a deployment bottleneck. The guidance reinforces that frameworks should be risk-based, adaptive, and aligned to real business impact. But the more important implication is practical — institutions now have more room to tailor governance effort to actual risk, which makes it possible to deploy models and AI systems faster while maintaining appropriate oversight.
Third-party and vendor models remain a major source of risk, especially as organizations rely more heavily on external AI services. Institutions are still expected to understand how these models work, validate their outputs, and monitor their performance over time. Aggregate risk also becomes more important as AI systems increasingly share data, assumptions, and infrastructure — failures can propagate across systems, so enterprise-scale AI requires portfolio-level visibility, not just point solutions for individual models.
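Portfolio-level visibility of the kind described above ultimately rests on an inventory that records what each model depends on, so shared points of failure can be surfaced. The sketch below is a toy illustration of that idea; the inventory fields, model names, and dependency names are all hypothetical.

```python
from collections import defaultdict

# Toy model inventory: each entry records the external services, data feeds,
# or upstream models a given model depends on. Names are illustrative.
inventory = {
    "credit_pricing":  {"depends_on": ["bureau_feed", "llm_vendor_api"]},
    "fraud_screening": {"depends_on": ["txn_stream", "llm_vendor_api"]},
    "chat_assistant":  {"depends_on": ["llm_vendor_api", "kb_index"]},
    "stress_testing":  {"depends_on": ["macro_feed"]},
}

def shared_dependencies(inv: dict, min_models: int = 2) -> dict:
    """Return dependencies used by at least `min_models` models. A failure or
    degradation in any of these propagates to every model listed for it."""
    users = defaultdict(list)
    for model, meta in inv.items():
        for dep in meta["depends_on"]:
            users[dep].append(model)
    return {dep: models for dep, models in users.items() if len(models) >= min_models}

for dep, models in shared_dependencies(inventory).items():
    print(f"{dep}: shared by {sorted(models)}")
```

Even this trivial aggregation makes the paper's point visible: a single shared vendor service can sit underneath several otherwise unrelated models, which is a concentration no per-model review would ever surface.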
Don't Wait for the Framework
The paper dedicates specific attention to agentic AI teams and the implications of SR 26-02's explicit deferral of guidance on these systems. The paper's argument is that deferral does not reduce the need for governance — it simply places the burden on institutions to build governance that matches the actual risk of these systems using the same underlying principles of materiality, ongoing monitoring, and effective challenge that apply elsewhere.
Agentic systems do not fit neatly into traditional model boundaries. They combine multiple models, tools, workflows, and decision points, which means risk must be assessed at the system level, not only at the component level. Materiality can change quickly as these systems take on broader decision authority, touch more sensitive workflows, or influence higher-impact business outcomes. Static governance is insufficient. Institutions need a way to identify where these systems operate, evaluate how risk changes over time, and escalate oversight as exposure grows.
This changes what good control design looks like. Traditional validation language is often too narrow for agentic AI. Instead, institutions need system-level evaluation, scenario-based testing, operational guardrails, and clear intervention points when behavior deviates from expectations. Monitoring becomes a live operational capability rather than a review process. The paper concludes that the first banks to operationalize agentic AI governance will not just reduce risk — they will shorten the time between experimentation and scaled deployment. The institutions that move first will not be the ones waiting for a bespoke agentic AI rulebook. They will be the ones building operating capabilities now.
Key Takeaway
"SR 26-02 increases the advantage of banks that can govern AI at scale without making governance the bottleneck."
SR 26-02 vs SR 11-7: What Actually Changed
| Dimension | SR 11-7 (2011) | SR 26-02 (2026) |
|---|---|---|
| Governance scope | Broad, applied relatively uniformly across models regardless of risk | Tiered by materiality — high-impact models get deep review, low-risk models eligible for streamlined oversight |
| Validation approach | Detailed validation cycles with prescriptive documentation expectations | Calibrated to model risk; greater weight on ongoing monitoring and outcomes analysis for lower-materiality models |
| Effective challenge | Measured primarily by documentation completeness | Measured by whether challenge leads to better outcomes and faster correction |
| GenAI / agentic AI | Not addressed (predates these systems) | Explicitly deferred to future guidance; Federal Reserve soliciting input from institutions |
| Non-compliance | Could result in supervisory criticism | Non-compliance with the guidance itself will not result in supervisory criticism; action may still follow from unsafe practices |
| Model inventory | Recordkeeping function | Foundation for enterprise-level visibility into concentrations, dependencies, and aggregate AI exposure |
A Window to Act
The paper closes with three strategic priorities that apply across all audiences in the post-SR 26-02 environment. First, remove structural delay from governance by aligning effort with impact, lowering the cost of control, and reducing the friction that slows model and AI deployment. Second, build governance infrastructure for AI scale through stronger monitoring capabilities, clearer inventory and metadata, and better enterprise-level visibility into dependencies, concentrations, and performance across systems. Third, reallocate expert capacity toward higher-value decisions by treating governance work proportionally to its actual risk and value rather than as uniformly important.
Notable Detail
ValidMind notes that capturing the SR 26-02 opportunity requires an operating system for risk-based governance — not just policy interpretation. The platform supports streamlined validation, stronger documentation, continuous monitoring, and enterprise visibility across traditional models and emerging AI systems, with specific automation support for low-materiality models.
Key Takeaway
"The first winners in agentic AI will not be the banks waiting for the rulebook. They will be the ones building robust governance frameworks into their operating model for agentic AI now."
Our Take
AI Compliance Take
SR 26-02 resolves a tension that has existed in financial services AI compliance for several years: the mismatch between governance frameworks designed for static, well-understood quantitative models and the operational reality of deploying machine learning systems, large language models, and autonomous agents at enterprise scale. The 2011 guidance was not wrong for 2011. It was just written for a world where "model" meant a credit scoring algorithm or a market risk calculation — not a system that updates continuously, generates its own outputs, and can take autonomous actions across connected business processes.
The most operationally significant change in SR 26-02 is not what it requires but what it permits. Tiered materiality means compliance functions can stop treating all models as equally demanding of expert attention. That is not a lowering of standards — it is an honest acknowledgment that the governance overhead applied to a low-risk internal classifier bears no relationship to the actual risk it creates. Redirecting that overhead toward the models that genuinely carry institutional risk is what good risk management has always argued for. SR 26-02 finally makes it defensible to the examiner.
The agentic AI gap is the most consequential open question in financial services AI compliance right now. SR 26-02's explicit deferral means banks deploying agentic systems are building governance programs without a regulatory template. That sounds like risk, and in one sense it is. In another sense it is the clearest opportunity available: institutions that build robust, defensible agentic AI governance now will not have to retrofit it when the framework eventually arrives. The ones that wait will be doing exactly what banks did with SR 11-7 — scrambling to build compliance infrastructure on top of systems that were already in production.