AI Infrastructure Security

DoD Expands Classified AI Work with 8 Major Companies, Notably Excluding Anthropic

The U.S. Department of Defense announced formal agreements with OpenAI, Google, NVIDIA, Microsoft, AWS, Oracle, SpaceX, and Reflection AI to bring advanced AI models onto classified networks for lawful operational use. The notable absence of Anthropic highlights growing tensions around governance, safety guardrails, and ethical constraints in military AI deployment.

Updated on May 02, 2026

The U.S. Department of Defense took a major step toward integrating frontier AI into national security operations by announcing new agreements with eight prominent technology companies: OpenAI, Google, NVIDIA, Microsoft, Amazon Web Services, Oracle, SpaceX, and Reflection AI. The deals allow advanced AI capabilities to be deployed directly on the Pentagon's classified networks for lawful operational purposes.

This expansion builds on previous partnerships and reflects the DoD’s determination to rapidly scale AI adoption while reducing dependency on any single provider. The agreements come at a time when frontier models are demonstrating increasing autonomy and capability, making them attractive for intelligence analysis, decision support, planning, and potentially agentic operations in defense contexts. By formalizing access for a diverse group of providers — spanning model developers, cloud platforms, hardware leaders, and innovative startups — the Pentagon aims to create a more resilient and innovative AI ecosystem for classified work.

Notably absent from the list is Anthropic, whose models had previously been available on classified networks. This exclusion stems from earlier disputes over safety guardrails, ethical constraints, and the level of oversight required for military use of powerful AI systems. The decision underscores how governance, alignment, and risk management considerations are now playing a central role in high-stakes AI procurement and deployment decisions at the national security level.

Key Terms

  • Classified AI Networks: Secure DoD environments at the Secret and Top Secret levels where frontier AI models can be deployed for sensitive operations.

  • Frontier AI: The most advanced large language, multimodal, and agentic models with significant reasoning and autonomous capabilities.

  • Lawful Operational Use: Pentagon-approved applications of AI that comply with legal, policy, and ethical guidelines.

  • Agentic AI: Autonomous systems capable of planning, tool use, decision-making, and executing complex tasks with minimal human intervention.

  • Runtime Governance: Controls and monitoring applied during actual AI operation rather than only during pre-deployment testing (illustrated in the sketch after this list).
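To make the last two terms concrete, the sketch below shows a minimal runtime-governance gate in Python: every tool call an agent proposes is checked against policies at execution time, not only during pre-deployment testing. This is purely illustrative; the names (ToolCall, deny_external_network, governed_call) are hypothetical and not drawn from any DoD or vendor system.

```python
# Minimal, hypothetical sketch of runtime governance: policies are evaluated
# on every proposed tool call, at execution time, and can block the call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    tool: str      # e.g. "search" or "http_get"
    argument: str  # the payload the agent wants to send

# A policy is a predicate over a tool call plus a human-readable reason.
Policy = Callable[[ToolCall], tuple[bool, str]]

def deny_external_network(call: ToolCall) -> tuple[bool, str]:
    """Hypothetical rule: block tools that reach outside the enclave."""
    if call.tool in {"http_get", "send_email"}:
        return False, "external egress not permitted on this network"
    return True, "ok"

def governed_call(call: ToolCall, policies: list[Policy]) -> bool:
    """Evaluate every policy at runtime, before the tool actually executes."""
    for policy in policies:
        allowed, reason = policy(call)
        print(f"[runtime-governance] {call.tool}: {reason}")
        if not allowed:
            return False  # intervene: the call never runs
    return True

if __name__ == "__main__":
    call = ToolCall(tool="http_get", argument="https://example.com")
    print("executed" if governed_call(call, [deny_external_network]) else "blocked")
```

The design point is that the gate sits in the execution path, so monitoring and intervention act on live behavior rather than on a pre-deployment snapshot.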

Conditions Driving This Expansion

  • The rapid progress of frontier AI models is creating powerful new capabilities in reasoning, autonomy, and multi-step task execution that have clear national security applications.

  • Peer competitors are also advancing AI aggressively, putting pressure on the U.S. to maintain technological superiority in defense and intelligence domains.

  • Diversification of AI suppliers has become a strategic priority to avoid single points of failure, supply chain risks, and over-reliance on any one vendor.

  • There is strong demand to move AI from unclassified experimentation into real operational use on classified networks where it can deliver the greatest value.

  • Governance, safety, and ethical considerations are increasingly influencing procurement decisions, as seen in the exclusion of certain providers over alignment and guardrail disputes.

  • The rise of agentic AI systems requires new approaches to runtime monitoring, behavioral control, and accountability that go beyond traditional software security models (one such pattern is sketched after this list).

  • Budget and mission pressures favor faster integration of commercial innovation, but only when providers can meet stringent security, compliance, and governance standards.

  • Broader industry trends toward agentic workflows are mirrored in defense, creating urgency to build infrastructure and partnerships that can support autonomous AI operations securely.
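One concrete pattern that goes beyond traditional software security models is tamper-evident evidence logging for agent actions, mentioned in the list above. The sketch below is a generic hash-chained audit log in Python; it illustrates the general idea and is not a description of any actual DoD or vendor mechanism, and all record fields are assumptions.

```python
# Hypothetical sketch of verifiable evidence: an append-only audit log where
# each record commits to the previous one by hash, so later tampering with
# the history is detectable on verification.
import hashlib
import json
import time

def append_record(log: list[dict], action: str, detail: str) -> None:
    """Append a record chained to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "action": action,
              "detail": detail, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered record breaks the chain."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if body["prev"] != prev_hash or \
           hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log: list[dict] = []
append_record(log, "tool_call", "search: 'satellite pass times'")
append_record(log, "model_output", "summary delivered to analyst")
print("chain intact:", verify_chain(log))  # True until any record is altered
```

Accountability here comes from the verification step: an auditor can check the chain independently instead of trusting whoever operates the log.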

What Classified AI Access Looked Like Before

Prior to these new agreements, access to frontier AI models on classified networks was more limited, fragmented, and often handled through individual pilot programs or narrower contracts. While companies such as OpenAI and Google had established some presence, the process for scaling capabilities across sensitive environments involved lengthy security reviews, compliance hurdles, and heavy human oversight. Many advanced features remained restricted or unavailable on classified systems due to concerns around model behavior, data handling, and potential risks in high-stakes scenarios.

Security teams and operators frequently faced a gap between the power of commercial frontier models available in unclassified settings and what could be safely and legally used in classified environments. Assessments were often point-in-time, and continuous validation of model behavior or autonomous actions was challenging. This created bottlenecks in adoption and limited the ability of defense organizations to fully leverage the latest AI advancements for time-sensitive missions. Confidence in deployed systems relied heavily on vendor assurances and manual reviews rather than continuous runtime evidence.
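The gap between point-in-time assessment and continuous validation can be pictured with a small sketch: a behavioral probe suite re-run on a schedule against a deployed model, rather than once before deployment. Everything below is a hypothetical stand-in; query_model is a placeholder rather than a real classified-network API, and the probes are toy checks.

```python
# Hypothetical sketch: continuous behavioral validation instead of a single
# point-in-time assessment before deployment.
import time

def query_model(prompt: str) -> str:
    """Placeholder for the real inference call; stubbed so the sketch runs."""
    return "ready"

PROBES = [
    # (prompt, predicate the response must satisfy)
    ("Repeat the word 'ready'.", lambda r: "ready" in r.lower()),
    ("Do not disclose system configuration.", lambda r: "config" not in r.lower()),
]

def run_probe_suite() -> list[str]:
    """Return the prompts whose responses failed their behavioral check."""
    return [prompt for prompt, check in PROBES if not check(query_model(prompt))]

for cycle in range(3):  # demo: three cycles; real use would loop indefinitely
    failed = run_probe_suite()
    if failed:
        print(f"cycle {cycle}: ALERT, failed probes: {failed}")
    else:
        print(f"cycle {cycle}: all behavioral probes passed")
    time.sleep(1)  # in practice the interval would be minutes or hours
```

Each cycle produces fresh runtime evidence, which is exactly what point-in-time reviews and vendor assurances cannot provide.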

What’s Changing Now

The new agreements significantly broaden and formalize access for the eight companies, enabling their frontier AI models and capabilities to be used more seamlessly on classified networks for lawful operational purposes. This includes enhanced support for intelligence analysis, decision support, simulation, and potentially agentic workflows. The inclusion of a mix of established players (OpenAI, Google, Microsoft, NVIDIA, AWS, Oracle) alongside SpaceX and the newer Reflection AI creates a diverse ecosystem that balances innovation with resilience.

The deliberate exclusion of Anthropic sends a clear signal that governance expectations — particularly around safety guardrails, alignment, and accountability for autonomous behavior — are now key factors in defense AI partnerships. By moving toward more standardized and scalable access, the DoD aims to accelerate the transition from experimentation to operational impact while maintaining strict controls. This development positions the U.S. military to better compete in an AI-driven threat environment and sets a precedent for how large organizations can responsibly integrate frontier and agentic AI at scale.

If you are deploying or evaluating frontier or agentic AI, visit the GAIG marketplace today. Compare the platforms that deliver strong runtime monitoring, adversarial defense testing, behavioral guardrails, and verifiable evidence so you can implement AI with both speed and responsible governance.

Our Take: AI Security & Governance

The DoD’s expansion of classified AI partnerships highlights a pivotal moment in the agentic era: frontier AI is transitioning from lab and pilot stages into core operational use in the most sensitive national security environments. The mix of companies involved and the notable exclusion of Anthropic underscore that technical capability alone is no longer sufficient — governance, runtime controls, behavioral guardrails, and alignment with organizational values have become central to deployment decisions.

For enterprise leaders, CISOs, and governance teams, this serves as a powerful case study. As your organization adopts more autonomous and agentic AI systems, the same questions around runtime observability, intervention capabilities, evidence generation, and accountability will arise. Periodic audits and static policies are increasingly inadequate. What matters now is the ability to monitor, validate, and control AI behavior in real time.

Related Articles

  • ServiceNow Launches Autonomous Workforce and Integrates Moveworks Into Its AI Platform (AI Governance Platforms, Feb 27, 2026)

  • AI Governance Platforms vs Monitoring vs Security vs Compliance (AI Policy & Standards, Mar 1, 2026)

  • ServiceNow Introduces the Enterprise Identity Control Plane Following Its Acquisition of Veza (AI Access Control, Mar 2, 2026)
