Pillar, a leader in runtime AI protection, and TrueFoundry, a leading AI platform for enterprises, have announced a native integration that embeds Pillar’s security capabilities directly into TrueFoundry’s AI Gateway. This allows organizations to enable real-time protection for every AI request flowing through the gateway with a simple configuration change — no additional infrastructure or application code modifications required.
TrueFoundry’s AI Gateway serves as the central routing layer for AI traffic, directing requests to models from OpenAI, Anthropic, AWS Bedrock, Azure, open-source models, or self-hosted deployments. With the integration, Pillar’s policy engine now inspects both incoming prompts and outgoing responses in real time, enforcing controls before traffic reaches any downstream model. The combined solution delivers uniform protection across all AI workloads while maintaining the low-latency routing that TrueFoundry is known for.
The partnership addresses a common enterprise pain point: organizations want both a robust AI gateway for traffic management and strong runtime security, but they have traditionally had to choose between them or add complex layers. Pillar and TrueFoundry now offer both capabilities in one unified deployment path, with a single place to configure policies and audit decisions.
Key Terms
AI Gateway — TrueFoundry’s central routing layer that manages traffic to multiple AI models and providers with minimal latency.
Runtime AI Protection — Pillar’s real-time inspection and enforcement of security policies on AI prompts and responses.
Prompt Injection & Jailbreak Protection — Detection and blocking of attacks that attempt to manipulate AI behavior through crafted inputs.
Data Leakage Prevention — Identification and redaction of sensitive information (PII, PCI, secrets) in prompts or model outputs.
Conditions Driving This Change
Several converging forces are making native runtime protection inside the AI gateway a must-have for enterprises.
Organizations are routing AI traffic through centralized gateways to manage multiple models and providers efficiently, but these gateways have historically lacked strong security controls.
The rise of agentic AI workflows means requests often involve multi-turn conversations, tool calls, and retrieved context, making single-turn security insufficient.
Prompt injection, jailbreaks, data exfiltration, and unsafe outputs remain persistent threats that can bypass traditional guardrails when not inspected in real time.
Security and platform teams want to avoid adding extra hops or latency when securing AI traffic.
Compliance and audit requirements demand consistent policy enforcement and logging across all AI interactions.
Enterprises need the ability to apply tiered policies (strict for production, more permissive for prototyping) without complex custom code.
The speed of AI adoption is outpacing the ability of separate security layers to keep up with evolving threats and workflows.
Together, these pressures create a clear need for a seamless integration that combines gateway routing with runtime protection in one place.
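To make the tiered-policy idea concrete, here is an illustrative sketch of what strict versus permissive profiles could look like. The field names and profile names are assumptions for illustration, not TrueFoundry's or Pillar's documented schema.

```yaml
# Hypothetical tiered policy profiles (illustrative field names only).
profiles:
  production-strict:
    prompt_injection: block        # hard-block detected attacks
    pii_leakage: redact            # strip sensitive data before routing
    content_moderation: block
  prototyping-permissive:
    prompt_injection: log_only     # surface findings without blocking
    pii_leakage: log_only
    content_moderation: allow
```

The point is that switching a workload from prototyping to production becomes a profile swap rather than a code change.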
What Security Looked Like Before
Before this integration, enterprises typically deployed an AI gateway for routing and traffic management while using separate security tools or custom guardrails for protection. This created multiple points of friction: additional latency from chaining services, inconsistent policy enforcement across models, and fragmented logging that made auditing difficult.
Security teams often had to choose between performance and protection. Adding runtime security layers frequently introduced complexity, required application code changes, or created maintenance overhead. In agentic workflows involving chained tool calls and session context, many solutions could only inspect individual turns, missing sophisticated multi-step attacks. The result was a patchwork approach where visibility and control were incomplete, and teams spent significant time managing disparate systems instead of focusing on actual risk reduction.
What’s Changing Now
The Pillar and TrueFoundry integration changes the deployment model. Pillar’s runtime protection is now natively built into TrueFoundry’s AI Gateway. Customers simply add their Pillar API key and policy profile in the Guardrails Group configuration, and protection is applied uniformly to all requests and responses flowing through the gateway.
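A configuration along these lines illustrates the setup described above. This is a sketch only: the keys and structure are assumptions for readability, not TrueFoundry's exact Guardrails Group schema.

```yaml
# Illustrative Guardrails Group configuration (field names assumed).
guardrails_group:
  name: pillar-runtime-protection
  provider: pillar
  api_key: ${PILLAR_API_KEY}       # supplied as a gateway secret
  policy_profile: production-strict
  apply_to:
    - requests                     # inspect incoming prompts
    - responses                    # inspect outgoing model output
```

Because the gateway applies this group to all matching traffic, no application code has to change to pick up the protection.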
The solution inspects full conversation and session context, making it effective for agentic workflows. It blocks prompt injection (including indirect and multi-turn attacks), detects PII/PCI/secrets leakage, performs content moderation, identifies reconnaissance and evasion attempts, and enforces custom policies. Verdicts (allow or block) are applied before routing to any model, and every scan is logged for audit and tuning.
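The enforce-then-route order can be sketched in a few lines. This is a minimal stand-in, not Pillar's actual engine or SDK: the marker list, function names, and verdict shape are all hypothetical, and the real policy engine inspects full session context rather than simple substrings.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    action: str              # "allow" or "block"
    reason: str = ""


# Every scan verdict is recorded, mirroring the audit logging above.
audit_log: list = []

# Hypothetical markers standing in for real injection detection.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "reveal your system prompt",
)


def scan(prompt: str) -> Verdict:
    lowered = prompt.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            return Verdict("block", f"injection marker: {marker!r}")
    return Verdict("allow")


def call_model(prompt: str) -> str:
    # Placeholder for the gateway's downstream model routing.
    return f"response to: {prompt}"


def route(prompt: str) -> str:
    # Enforce-then-route: the verdict is applied and logged before
    # any downstream model is called.
    verdict = scan(prompt)
    audit_log.append((prompt, verdict))
    if verdict.action == "block":
        return "blocked by policy"
    return call_model(prompt)
```

The design point is the ordering: a blocked request never reaches a model, and every decision, allow or block, lands in the audit log for later tuning.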
The integration supports progressive rollout, allowing teams to start with specific models, teams, or environments. Combined with TrueFoundry’s per-request tracing, organizations gain complete end-to-end visibility and audit coverage without added latency or complexity.
Our Take
AI Security Take
Pillar and TrueFoundry’s native integration delivers a practical and much-needed solution for enterprises running AI in production. By embedding runtime protection directly into the AI Gateway, organizations no longer have to choose between efficient traffic routing and strong security — they can have both with a single configuration.
This approach is especially valuable for agentic AI workloads, where threats often hide in multi-turn context or chained tool calls. The ability to inspect full sessions, enforce consistent policies, and maintain comprehensive audit logs gives security teams the controls they need without slowing down innovation.
If you’re building or scaling AI applications and need robust runtime protection across models and providers, visit the GAIG marketplace to compare the platforms and vendors that deliver integrated gateway routing alongside real-time AI security and governance.