SAP and Uptycs announced new verifiable AI cybersecurity capabilities designed to strengthen how enterprises secure artificial intelligence systems operating inside production environments. The announcement introduces new tooling centered around the companies’ Juno platform and a “Glass Box” architecture approach intended to make AI behavior observable inside enterprise environments.
Traditional cybersecurity tools were designed to monitor software applications and user behavior. Autonomous and semi‑autonomous AI systems operate differently and often fall outside the visibility of those tools. As organizations begin deploying assistants, copilots, and automated decision systems that interact directly with internal data and enterprise infrastructure, security teams often lack visibility into what those systems actually accessed or why a particular output was generated.
The control challenge becomes more serious as AI systems connect to internal databases, APIs, customer records, and operational workflows. These integrations increase the usefulness of AI tools but also expand the potential consequences of incorrect outputs, unintended data access, or automated actions executed across enterprise systems.
SAP and Uptycs are positioning their Juno platform and Glass Box architecture as a response to that visibility gap. By linking AI outputs to traceable data sources, telemetry, and evidence logs, the platform attempts to give security teams a way to verify how an AI system reached a particular conclusion or action inside a production environment.
Key Terms Used Throughout the Article
Before examining the structural pressures behind this announcement, several terms used throughout the article benefit from clear definitions. These concepts appear frequently in discussions about AI security and governance but are often interpreted differently across organizations.
Verifiable AI
refers to security and monitoring capabilities that allow organizations to confirm how artificial intelligence systems behave while operating. These systems record how AI models access data, interact with enterprise applications, and generate outputs so that security teams can review and verify the actions taken by the system.
Glass Box Architecture
refers to an AI system design approach where every output the system generates is linked to the specific data sources, logs, and evidence that produced it. Unlike opaque AI systems that produce conclusions without traceable reasoning, Glass Box systems allow security teams to follow the exact chain of evidence behind every finding generated by the AI.
Runtime
describes the period when an AI system is actively operating and interacting with data, users, or other software systems. Security controls that function during runtime observe the behavior of the AI system while it performs tasks rather than only analyzing it before deployment.
Enterprise Infrastructure
refers to the internal systems organizations rely on to operate their business. This includes databases, internal applications, APIs, servers, and cloud services that store, process, and transmit company information.
Prompt Injection
is a technique used to manipulate AI systems by providing specially crafted inputs that cause the model to ignore its intended instructions and perform unintended actions, such as revealing sensitive information or executing restricted tasks.
Enterprise AI Systems Are Creating Security Blind Spots
The launch of verifiable AI security capabilities by SAP and Uptycs comes as organizations push artificial intelligence systems deeper into day‑to‑day business operations. As AI systems begin interacting with internal applications, databases, and workflows, security teams are discovering that many existing security tools were never designed to observe or verify the behavior of these systems.
Several pressures are driving demand for stronger verification and monitoring of enterprise AI activity.
Enterprises are integrating AI systems directly into internal databases, APIs, and operational workflows, which increases the potential impact of unexpected or incorrect system behavior.
Security teams often cannot clearly see how an AI system retrieves data, generates outputs, or interacts with internal infrastructure while it is operating.
New attack techniques such as prompt injection and model manipulation create risks that traditional cybersecurity monitoring tools were not designed to detect.
Internal audit teams and regulators increasingly expect organizations to show how AI systems are monitored, secured, and documented within production environments.
AI systems embedded inside enterprise software can access sensitive business information, which raises concerns about data exposure and unintended system actions.
Organizations are deploying AI faster than they are building governance and security oversight, creating gaps between adoption and control.
AI agents operating autonomously across enterprise tools can trigger actions, access systems, and modify configurations without human review, creating exposure that only surfaces after the action has already occurred.
These pressures are forcing enterprise software vendors and cybersecurity providers to rethink how AI systems are monitored once they are deployed. Verification technologies are emerging as one approach for giving organizations clearer visibility into how AI systems behave while operating inside enterprise infrastructure.
Runtime Verification Gives Security Teams Visibility Into AI System Behavior
Verifiable AI security systems focus on what happens while an AI system is actively operating. During runtime the system interacts with real users, internal data sources, and enterprise applications, which creates the need for clear observation of its behavior.
Consider a typical enterprise scenario. A company deploys an AI assistant connected to internal knowledge bases and customer records. Employees ask the system questions and the assistant retrieves information from several internal databases to generate answers. Without runtime visibility, security teams cannot determine which data sources the system accessed, how the response was produced, or whether sensitive information was exposed during the interaction.
Verification systems attempt to solve this problem by recording the actions taken by the AI system as they occur. These records can show which databases were accessed, what information was retrieved, which applications were contacted, and how outputs were produced. The Juno platform structures this activity using a unified ontology that maps roughly 150,000 telemetry columns across systems, allowing the platform to connect AI behavior to specific infrastructure interactions.
These records allow security teams to investigate unusual activity, reconstruct how an AI system reached a particular output, and trace whether sensitive data or systems were involved in the process. The activity logs also provide documentation that can support internal audits or post‑incident reviews when organizations must explain why an AI system accessed a resource or triggered a particular action.
Security Authority Expands As AI Systems Gain Access To Enterprise Data And Workflows
As AI systems begin interacting with internal applications and sensitive company data, responsibility for overseeing those systems spreads across several teams inside the organization. Security teams often monitor runtime behavior and investigate unusual activity, while data science teams remain responsible for model design, training, and performance management.
Governance and risk teams define the policies that determine which AI systems can be deployed, what data they are permitted to access, and how their activity must be documented. These policies establish the boundaries that security teams enforce through monitoring tools, logging systems, and runtime security controls.
The introduction of verifiable reasoning systems such as Glass Box architecture attempts to address a common problem during security investigations. When an incident occurs, security teams often cannot explain why a model generated a particular output or why a system accessed a certain resource. Without traceable reasoning, post‑incident reviews become difficult and accountability weakens.
Even with improved verification, several governance gaps remain. Many organizations still lack a complete inventory of AI systems deployed across departments, which makes consistent oversight difficult. Security visibility can also become limited when employees connect external AI services or third‑party models to internal data sources. In addition, verification tools can document system behavior, but human review is still required to interpret activity and determine whether a security or compliance risk occurred.
Our Take
AI Security Take
Enterprise AI adoption is moving faster than the security architecture designed to supervise it. Organizations are deploying assistants, copilots, and automated systems that interact directly with internal data, applications, and operational workflows. As those systems gain authority to retrieve information and trigger actions, security teams must understand not only what the system produced but how it reached that outcome.
The concept behind Glass Box architecture reflects a broader shift in enterprise security thinking. Security teams increasingly need AI systems capable of explaining their reasoning and linking conclusions to the evidence that produced them. Industry leaders have begun describing the opposite condition as “security slop,” where AI tools generate outputs that cannot be traced back to evidence or data sources.
Verification systems address this problem by linking AI outputs to the telemetry, logs, and data interactions that produced them. When an incident occurs, security teams can follow the chain of evidence behind the decision instead of relying on opaque model behavior.
The organizations that manage AI risk effectively will not necessarily be the ones deploying the most AI. They will be the ones that can explain what their systems did, which systems were involved, and why those actions occurred when something goes wrong.