Infosys, one of the world’s largest IT services companies operating across 59 countries, has deployed an enterprise-wide AI Management System built on Infosys Topaz Responsible AI Suite and IBM watsonx.governance. The system now governs more than 2,700 AI use cases across internal operations, client-facing deployments, and third-party AI integrations at the same time. The outcomes are unusually concrete for a governance announcement: Infosys reports a 150% improvement in operational efficiency, zero downtime in governance operations, ISO 42001 certification, and alignment with both the EU AI Act and the NIST AI Risk Management Framework.
The reason this had to happen becomes fairly obvious once you look at the scale involved. Infosys was expanding AI across internal workflows and client environments faster than manual governance processes could keep up. The problem was not whether the company had policies on paper. The problem was that oversight, risk review, and regulatory alignment all had to work across thousands of active use cases, multiple geographies, and several stakeholder groups at the same time. At that scale, centralized visibility and automated assessment stop being optional and start becoming part of how the system operates.
What makes this deployment matter beyond Infosys is that the company is now taking the model to market for enterprise clients. That changes the story from internal case study to reference architecture. The system embeds AI Review Board approvals into the development lifecycle, automates risk categorization, produces real-time dashboards, and routes low-risk tools through straight-through processing instead of sending everything through manual review. It is a working example of how AI governance can move from documentation and committee oversight into operational infrastructure.
Most enterprises are still governing AI through policy documents, pre-deployment checklists, and periodic audits. Infosys is governing AI through automated workflows, dynamic risk assessment, and continuous oversight across thousands of active use cases. The distance between those two positions is where the enterprise AI governance market is forming right now, and this deployment is one of the clearest available examples of what the more mature end of that market looks like.
Key Terms
AI Management System (AIMS)
An AI Management System is the operating structure an organization uses to evaluate, approve, monitor, and govern AI systems across their lifecycle. In this case, it is the centralized system Infosys uses to manage thousands of AI use cases at scale.
IBM watsonx.governance
IBM watsonx.governance is IBM’s platform for AI governance, risk management, and compliance oversight. It helps organizations document, assess, and monitor AI systems against internal controls and external regulatory requirements.
Infosys Topaz Responsible AI Suite
Infosys Topaz Responsible AI Suite is the company’s internal responsible AI layer used to support governance workflows, model oversight, and risk management. It works with watsonx.governance to manage approvals and compliance processes across the AI portfolio.
ISO 42001
ISO 42001 is an international standard for AI management systems. It requires organizations to establish a systematic and auditable way to govern AI use, risk, accountability, and ongoing oversight.
EU AI Act
The EU AI Act is the European Union’s regulatory framework for artificial intelligence. It introduces obligations for organizations deploying certain categories of AI systems, especially where risk and accountability requirements are high.
NIST AI RMF
The NIST AI Risk Management Framework is a governance framework published by the U.S. National Institute of Standards and Technology. It provides guidance for managing AI risk as a continuous operational process rather than a one-time compliance task.
AI Review Board (AIRB)
An AI Review Board is the internal governance body that reviews and approves AI use cases before or during deployment. In Infosys’s system, AIRB approvals are embedded into the workflow rather than handled separately from development.
Straight-Through Processing
Straight-through processing refers to low-risk AI tools or use cases moving through governance workflows without manual intervention. The purpose is to reduce bottlenecks while keeping review standards in place for higher-risk systems.
Risk Categorization
Risk categorization is the process of classifying AI systems based on their level of risk, regulatory exposure, and governance requirements. In this deployment, that process is automated rather than left to case-by-case manual judgment.
AI Lifecycle Management
AI lifecycle management covers how AI systems are governed from design and development through deployment, monitoring, and retirement. A mature governance system needs to operate across all of those stages rather than at only one checkpoint.
Conditions Driving Infosys’s AI Governance System
These conditions are what forced Infosys to build governance as infrastructure rather than leave it as a review process sitting beside the work.
Enterprise AI portfolios are expanding across business units, client environments, and geographies faster than manual governance processes can review them with any consistency.
Operating across 59 countries means regulatory requirements do not line up neatly. A single system has to satisfy ISO 42001, EU AI Act expectations, and NIST AI RMF guidance at the same time without breaking governance into separate tracks.
Internal, client-facing, and third-party AI use cases carry different risk profiles, which makes one-size-fits-all review processes hard to sustain once use case volume grows.
Managing more than 2,700 active AI use cases is not a staffing problem that can be solved by adding more reviewers. At that point, governance has to be automated or the review process becomes the bottleneck.
Enterprise boards, regulators, and procurement teams are asking for proof of oversight rather than policy language alone. Audit trails, approval records, risk scores, and operational metrics are increasingly part of what large organizations are expected to produce.
ISO 42001 certification requires a systematic and auditable AI management process. That pushes governance toward a continuous operating system rather than a set of occasional reviews.
As Infosys takes this model to enterprise clients, governance maturity becomes part of the product itself. Clients are not only buying AI capability. They are evaluating whether the vendor can govern it at scale.
What Enterprise AI Governance Looks Like Before Systems Like This
Most enterprises still govern AI through policies, ethics committees, pre-deployment reviews, and occasional audits. That approach can hold together when the number of use cases is small and the system changes slowly enough for human reviewers to keep track. In those conditions, governance feels manageable because the portfolio is still limited and the review process remains visible.
The breakdown starts when AI adoption moves beyond a handful of projects and spreads across departments, products, and external deployments. Once use cases grow into the hundreds, manual review turns into a bottleneck. Different reviewers apply different standards, documentation becomes uneven, and audit trails depend too heavily on whether someone remembered to record the decision at the right moment. The process still exists, but it stops producing a complete view of the portfolio.
The harder problem is that governance often becomes siloed even before teams notice it. Internal tools may follow one workflow, client-facing systems another, and third-party integrations a third, with no shared system showing how risk is distributed across all three. Oversight still happens, but it happens in separate pockets, which makes it difficult to understand the organization’s overall posture once AI use case volume starts to climb.
That matters beyond Infosys because the scaling problem is not unique to one company. Any enterprise with a growing AI portfolio eventually runs into the same pressure: governance can either be built into the operating system of how AI is approved, deployed, and monitored, or it can be added afterward once compliance questions begin to pile up. The tools to close that gap now exist. The real divide is whether organizations are treating governance as infrastructure or still treating it as documentation.
What Infosys Built and How the System Operates
When you look at how this system is set up, what stands out quickly is that governance is not something added after deployment. It sits directly inside the lifecycle and triggers review as work moves forward: the AI Management System built on Infosys Topaz and IBM watsonx.governance evaluates, approves, and tracks use cases as they are created and deployed. AI Review Board approvals are part of the workflow itself, which means governance happens at the same time decisions are being made.
In practice, risk is not something teams assign manually on a case-by-case basis. Each use case is categorized automatically against criteria aligned with ISO 42001 and the EU AI Act, and a lot of the compliance input is generated from system data instead of being written from scratch each time. Low-risk systems move through straight-through processing without waiting for review, while higher-risk systems are routed into deeper evaluation, which ends up concentrating attention where it is actually needed.
Use cases are also not treated as a single category, which matters more than it might seem at first. Internal tools, client-facing deployments, and third-party integrations follow different governance paths, but they remain visible inside the same system, so oversight does not split across teams. That structure reflects how risk actually shows up across environments while still keeping everything connected.
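To make the routing logic concrete, here is a minimal sketch of how automated risk categorization with straight-through processing can work. This is an illustration only, not Infosys's or IBM's actual implementation: the risk attributes, tier thresholds, and workflow names are all hypothetical stand-ins for the kind of criteria a system aligned with ISO 42001 and the EU AI Act would encode.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class UseCase:
    name: str
    handles_personal_data: bool   # hypothetical risk attribute
    client_facing: bool           # internal vs client-facing deployment
    automated_decisions: bool     # affects people without human review

def categorize(uc: UseCase) -> RiskTier:
    # Toy scoring rule: each risk-bearing attribute raises the tier.
    # A real system would map attributes to regulatory risk classes.
    score = sum([uc.handles_personal_data, uc.client_facing, uc.automated_decisions])
    if score == 0:
        return RiskTier.LOW
    if score == 1:
        return RiskTier.MEDIUM
    return RiskTier.HIGH

def route(uc: UseCase) -> str:
    # Low-risk cases skip manual review (straight-through processing);
    # higher-risk cases are routed to progressively deeper evaluation.
    tier = categorize(uc)
    if tier is RiskTier.LOW:
        return "straight-through: auto-approved with audit record"
    if tier is RiskTier.MEDIUM:
        return "standard review: single reviewer sign-off"
    return "AIRB review: full board evaluation before deployment"

print(route(UseCase("internal log summarizer", False, False, False)))
print(route(UseCase("client-facing chatbot", True, True, True)))
```

The design point the sketch captures is the one the article describes: categorization is a deterministic function of system data, so reviewer attention is concentrated on the high-risk tier while every case, including the auto-approved ones, still leaves an audit record.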
Over time, what this produces is a system that can hold under real conditions rather than controlled testing. Infosys is governing more than 2,700 use cases, reporting a 150% improvement in operational efficiency, and maintaining continuous visibility through dashboards and alerts that track model behavior as it changes. Alignment with ISO 42001, the EU AI Act, and NIST AI RMF is built into how the system runs, not layered on afterward.
Infosys is now taking this system to clients, which is where it starts to matter beyond internal use: the model has operated under real production pressure and is now being tested in environments outside the company itself.
Our Take
AI Governance Take
Once a system reaches the point where it is managing thousands of AI use cases, governance cannot rely on review cycles or periodic checks. It has to run continuously alongside the work itself, and that is what you start to see here. Governance is not scheduled. It is part of how the system operates day to day.
What changes most is where governance sits in relation to risk. Review does not wait until deployment, and it does not depend on someone remembering to check. Risk categorization happens as systems are being used, and the workflow adjusts based on what is actually being deployed, which removes the gap between building and governing that usually creates problems later on.
At the same time, the system reduces variation in how decisions are made. Manual review tends to introduce differences between teams, reviewers, and timelines, especially as volume increases. Automation standardizes how risk is categorized and how approvals are triggered, which is what allows governance to scale without losing consistency across use cases.
There are still limits to what this system can capture. Governance applies to what enters the workflow, so anything built or deployed outside of that structure will not be visible. The model also reflects how Infosys operates internally, which means other enterprises will need to adapt it to their own regulatory environment and internal processes. The system works at scale, but it is not universal by default.
Enterprises that are still relying on documentation and periodic review are already operating with gaps that expand as their AI portfolio grows. At this point, the difference is not theoretical. It shows up in how systems are actually managed.