AI Infrastructure Security

NVIDIA GTC 2026 Partnerships: CrowdStrike, Arize AI, ServiceNow, TrendAI, DataRobot, H2O.ai, and Mistral AI

NVIDIA GTC 2026 marked a turning point for enterprise AI. Security, governance, and monitoring vendors aligned around a shared infrastructure layer, signaling how AI systems will be controlled moving forward.

Updated on March 17, 2026
NVIDIA GTC 2026 Partnerships: CrowdStrike, Arize AI, ServiceNow, TrendAI, DataRobot, H2O.ai, and Mistral AI

Enterprise teams are deploying AI systems across products, workflows, and internal operations faster than they can fully account for them. Models are being connected to APIs, agents are interacting with tools, and inference is happening across both cloud and on-prem environments. The systems are expanding, but the controls around them are still catching up.

At NVIDIA GTC 2026, that gap became visible through a coordinated set of partnerships across CrowdStrike, TrendAI, Arize AI, ServiceNow, DataRobot, H2O.ai, and Mistral AI. Each company approached a different part of the problem, but all of them aligned around the same place where AI systems now run: NVIDIA’s infrastructure layer.

The reason is straightforward. Enterprise adoption is accelerating faster than governance coverage. Recent research shows that 94% of enterprises expanded their AI footprint this year, while only 66% formally test the majority of those systems. As more AI workloads move onto NVIDIA-powered environments, vendors building security, monitoring, and governance tools need to operate directly inside that infrastructure to remain relevant.

What makes this moment different is how these capabilities are being positioned. Security validation is being introduced before deployment using digital twin simulations. Monitoring is connecting directly to inference endpoints through services like NIM. Governance controls are being embedded into agent runtimes rather than applied after the fact. These are not separate integrations. They represent a shift in where control sits in the AI lifecycle.

NVIDIA is positioning its infrastructure as the layer enterprise AI governance is built on. The compliance and audit implications of that shift are still developing, but the direction is clear. GAIG is tracking how these systems come together because enterprise teams need a clear view of how security, governance, and monitoring now operate as part of the same stack.

Key Terms Used Across NVIDIA GTC 2026 Announcements

AI Factory

An AI factory refers to the infrastructure environment where AI models are trained, deployed, and operated at scale, including compute, data pipelines, and orchestration systems.

NIM (NVIDIA Inference Microservices)

NIM is NVIDIA’s framework for deploying and serving AI models as standardized inference endpoints, allowing enterprises to run models consistently across environments.

Agent Runtime

The agent runtime is the execution layer where AI agents operate, interact with tools, call APIs, and make decisions in real time within enterprise systems.

OpenShell

OpenShell is NVIDIA’s framework for managing how AI agents interface with external systems, defining how actions are executed and controlled.

AI-Q Blueprint

AI-Q is a reference architecture that defines how AI agents are structured, secured, and deployed within NVIDIA-based environments.

Digital Twin Security Validation

Digital twin validation allows enterprises to simulate their AI infrastructure in a virtual environment and test security controls before deploying systems in production.

Secure-by-Design

Secure-by-design refers to building security controls directly into systems at the architecture level rather than adding them after deployment.

Conditions That Drove the NVIDIA GTC 2026 Partnerships

These announcements did not happen in isolation. They are a response to a set of pressures that are already shaping how enterprise AI systems are deployed, secured, and evaluated in production environments.

  • AI agents are now operating inside production environments where they interact with internal systems, external APIs, and enterprise data without human intervention at each step

  • Traditional security tooling was designed to monitor static applications, not systems that generate actions, call tools, and make decisions across multiple steps in real time

  • Inference is increasingly moving on-premises due to data residency, regulatory requirements, and enterprise control over sensitive data, bringing security and monitoring requirements closer to the infrastructure layer

  • The application and agent layer now carries the highest concentration of AI-related attacks, accounting for 51% of reported incidents according to recent research

  • Prompt injection attacks increased 540% in 2025, expanding the attack surface beyond model outputs into inputs, tool usage, and system behavior

  • Governance frameworks such as NIST AI RMF and regulatory pressure from the EU AI Act are pushing organizations to demonstrate how AI systems are controlled at the system level, not just documented in policy

  • Security leaders are already responding to this shift, with 98% planning to increase the number of testing methods applied to AI systems over the next year

What Enterprise AI Security Looks Like Before These Partnerships

Most enterprise AI security today, in practice, is still built around documentation, review cycles, and internal approval processes rather than direct control over how systems actually behave once they are running in production environments. Teams define policies, run periodic audits, and rely on security checklists to confirm whether a system meets internal or external requirements, which, to be fair, worked reasonably well for traditional software where behavior tends to remain stable and changes are easier to trace over time.

AI systems introduce a very different operating reality that those processes were not designed to handle. Models respond differently depending on inputs, agents execute multi-step actions across tools and APIs, and integrations expand as new capabilities are layered in during development. As a result, security teams are often in a position where they are asked to evaluate systems that do not have a complete or continuously updated record of how they actually operate, especially when AI features are being introduced quickly across active production workflows.

In practice, this starts to show up as a visibility problem that is harder to close than most teams expect. Many organizations cannot confidently produce a full inventory of which models are running, how those models interact with internal data, or where decisions are being made inside agent workflows. On top of that, shadow usage continues to grow as teams experiment with tools that never pass through formal security review, so by the time questions surface, the systems are already live and connected to real business processes.

The infrastructure layer reflects the same pattern, just at a deeper level. Monitoring tools are often not connected directly to inference endpoints, which means visibility into real-time behavior is limited and sometimes delayed. Governance controls are applied after deployment instead of being built into how agents actually execute tasks, and security validation typically happens once systems are already live rather than before they are introduced into production environments where the impact is immediate.

As AI systems continue to scale across more workflows and more environments, these gaps become increasingly difficult to manage in a consistent way. Organizations end up operating with security layers that rely on delayed signals while the systems themselves are making decisions and taking actions in real time, which creates a disconnect between how systems behave and how they are being overseen that these partnerships are now trying to close.

How Each NVIDIA GTC 2026 Partnership Actually Changes Enterprise AI Systems

When you look at these announcements side by side, what stands out pretty quickly is that each company is solving a different part of the same underlying problem, but they are all doing it inside the same infrastructure layer. That alignment is what makes this set of partnerships more important than a typical integration announcement, because it starts to define how control is distributed across security, monitoring, and governance as systems move into production.

AI Security: What Changes When Security Moves Into the Agent Runtime

CrowdStrike’s integration with NVIDIA is one of the clearest examples of what it means to move security closer to where AI systems actually operate. By embedding Falcon directly into NVIDIA agent architectures through AI-Q and OpenShell, enterprises are no longer relying only on external monitoring or post-incident analysis to understand what agents are doing.

  • Falcon protection is applied at the runtime level, which allows teams to observe and respond to agent behavior as actions are being executed

  • Prompt injection detection becomes part of the execution flow rather than something identified after outputs are generated

  • Agent misuse and abnormal behavior can be flagged in context, not just through logs or delayed signals

What this effectively does, in practice, is shift security from a reactive layer into something that sits alongside the system as it operates. That matters because agents are not just producing outputs, they are taking actions across tools and systems, so the point of control has to exist at that same level of execution.

TrendAI’s work with NVIDIA DSX Air adds a different dimension to this, which is the ability to test security before systems are even deployed. Using digital twin simulation, enterprises can model their AI factory environments and run validation scenarios against them without exposing real infrastructure.

  • Vision One AI Factory EDR operates at the infrastructure level through BlueField DPUs, giving visibility into system behavior inside the simulated environment

  • TippingPoint provides network-level protection, allowing teams to test how attacks would propagate across connected systems

In a very practical sense, this means security teams can run scenarios against their AI systems before those systems exist in production, which is a different posture than validating risk after deployment. It changes how teams think about readiness, because the system can be evaluated in conditions that closely resemble real-world operation.

AI Monitoring: What Changes When Observability Connects Directly to Inference

Arize AI’s integration with NVIDIA NIM focuses on what happens after deployment, but it does so in a way that removes a step that most teams currently have to manage manually. By supporting NIM natively within Arize AX, models that are deployed through NVIDIA’s inference layer can be connected directly to monitoring and evaluation workflows.

  • Models deployed via NIM can be observed without additional instrumentation, reducing the gap between deployment and visibility

  • Continuous evaluation is tied directly to production behavior, allowing teams to track how models perform as inputs and usage patterns change

  • The feedback loop into NeMo enables fine-tuning based on real production data, which keeps models aligned with current conditions

For teams operating under data residency or compliance requirements, this also creates a path where both inference and monitoring can remain inside controlled environments. That becomes relevant as more organizations move workloads on-prem or into private cloud configurations where external tooling is not always an option.

DataRobot’s integration with Nemotron-3 Super sits in a similar monitoring space, but with a stronger focus on MLOps workflows and model performance tracking across deployment pipelines. It reinforces the idea that monitoring is no longer a separate layer, but something that needs to exist as part of how models are deployed and managed over time.

AI Governance: What Changes When Control Moves Into the System Itself

ServiceNow’s Autonomous Workforce, built on NVIDIA’s Agent Toolkit and Nemotron models, reflects how governance is starting to move into the execution layer of AI systems. Through AI Control Tower, enterprises can define policies, monitor agent behavior, and maintain audit trails within the same environment where those agents are operating.

  • Governance policies are applied directly to agent actions, rather than being enforced through external review processes

  • Audit trails are generated as part of execution, which provides a record of how decisions were made and actions were taken

  • Oversight becomes continuous, instead of something that happens during scheduled compliance checks

This changes how governance is implemented, because it moves from documentation and reporting into something that is tied to system behavior in real time. For organizations dealing with regulatory requirements, that shift is important because it aligns more closely with expectations around continuous monitoring and accountability.

H2O.ai’s work with NVIDIA RunAI and AIQ extends this idea into long-running agent systems, where governance needs to account for how agents evolve and operate over extended periods of time. The focus here is less on single interactions and more on lifecycle management, which becomes relevant as agents are given more autonomy across workflows.

Mistral AI’s partnership with NVIDIA introduces a different angle through its emphasis on digital sovereignty and controlled deployment environments. While it does not present a standalone governance system, it contributes to the broader requirement that AI systems can be operated within defined regulatory and geographic boundaries.

Taken together, these partnerships show a consistent direction. Security is moving into the runtime, monitoring is connecting directly to inference, and governance is being embedded into how systems execute. That alignment is what makes this moment structurally important for enterprise AI.

What These Partnerships Change for Enterprise Teams in Practice

When you step back and look at how these integrations actually affect day-to-day operations, what becomes clear fairly quickly is that each function inside the enterprise stack is being forced to adjust where and how it interacts with AI systems. These are not incremental tooling upgrades. They change when teams intervene, what they can see, and how early they can act.

For security teams, the shift shows up in when validation happens and how close controls sit to execution. With TrendAI’s digital twin integration, for example, teams can now simulate AI factory environments and run security scenarios before anything is deployed into production. That means red team exercises, attack simulations, and infrastructure-level validation can happen in a controlled environment that still reflects how the system will behave once it is live. At the same time, CrowdStrike embedding Falcon into the agent runtime means security is no longer waiting for logs or alerts after the fact, but instead observing and responding to actions as they occur. In practice, this moves security earlier in the lifecycle and closer to execution, which is a different operating model than most teams are used to.

For monitoring teams, the change is less about new signals and more about how quickly those signals become available. Arize connecting directly to NVIDIA NIM removes the step where teams would normally have to instrument models after deployment just to get visibility into production behavior. Now, the moment a model is deployed through NIM, it can be observed, evaluated, and fed back into a tuning loop through NeMo. That shortens the time between deployment and insight, which matters because model behavior can drift quickly as inputs and usage patterns evolve. For organizations with strict data residency requirements, the ability to run both inference and monitoring within the same controlled environment also reduces the need to move data across boundaries, which has been a consistent blocker for adoption.

For governance teams, the shift is happening at the level of control itself. ServiceNow’s AI Control Tower governing NVIDIA-powered agents means that policies, audit trails, and oversight mechanisms are being applied during execution rather than reconstructed after the fact. That changes what governance looks like in practice, because instead of reviewing reports or documentation, teams are now working with systems that record how decisions are made as they happen. As regulatory frameworks continue to emphasize continuous monitoring and accountability, this kind of integration becomes less of a feature and more of a requirement.

Across all three functions, the common pattern is that control is moving closer to the system and earlier in the lifecycle. Security validates before deployment and operates during execution. Monitoring connects directly to inference instead of being layered on afterward. Governance is embedded into how systems run rather than applied through external review. For enterprise teams, the immediate implication is that existing workflows will need to adjust, because the tools are no longer operating at a distance from the systems they are meant to oversee.

In practical terms, the next step for most organizations is to understand where their current AI systems sit relative to this infrastructure. That means identifying which workloads are already running on NVIDIA environments, which of these integrations are available today versus still developing, and where gaps still exist between system behavior and system oversight. Teams that expand their agent footprint without addressing those gaps are likely to run into the same visibility and control issues that these partnerships are trying to resolve.

Our Take

When this many vendors across security, monitoring, and governance align around the same infrastructure provider at the same moment, it usually signals that the market is settling on where control will actually live. NVIDIA is not just hosting workloads here. It is becoming the place where those workloads are evaluated, observed, and governed, which gives it influence over how enterprise AI systems are built and how they are controlled once they are running.

What stands out in these partnerships is that the direction is consistent across categories. Security is being designed into the system before it reaches production and remains present during execution. Monitoring is connected directly to inference instead of being layered on after deployment. Governance is moving into the runtime so that policies and audit trails are generated as systems operate, not reconstructed later. These are practical shifts that line up with what enterprise teams have been trying to do but have not had the infrastructure support to execute consistently.

There are still gaps that these integrations do not fully address. Most of this control is applied to systems that enterprises build and run within their own environments, which means third-party models, external agents, and cross-vendor interactions can still fall outside of that visibility. Shadow usage does not disappear simply because infrastructure becomes more integrated, and coordination across organizational boundaries remains a challenge that no single vendor stack can solve on its own.

For enterprise teams, the takeaway is fairly direct. Start by identifying which of your AI systems already run on NVIDIA infrastructure and which ones are likely to move there. From there, evaluate which of these security, monitoring, and governance capabilities are available today versus still in development, and prioritize closing the gaps that sit closest to production risk. Expanding agent usage without addressing those layers will increase exposure faster than most teams can manage.

GAIG will continue to map how these vendors fit together as this layer develops, because the question is no longer which individual tool to choose, but how the entire control stack comes together around the systems that enterprises are actually running.

Related Articles

ServiceNow Launches Autonomous Workforce and Integrates Moveworks Into Its AI Platform Governance Platforms

Feb 27, 2026

ServiceNow Launches Autonomous Workforce and Integrates Moveworks Into Its AI Platform

Read More
AI Governance Platforms vs Monitoring vs Security vs Compliance Governance Platforms

Mar 1, 2026

AI Governance Platforms vs Monitoring vs Security vs Compliance

Read More
BigID and Atlan Launch Unified Structured and Unstructured Data Catalog for AI Governance at Gartner Data & Analytics Summit Governance Platforms

Mar 10, 2026

BigID and Atlan Launch Unified Structured and Unstructured Data Catalog for AI Governance at Gartner Data & Analytics Summit

Read More

Stay ahead of Industry Trends with our Newsletter

Get expert insights, regulatory updates, and best practices delivered to your inbox