On March 12, 2026, security startup Onyx Security announced its launch with a $40 million seed round led by Conviction and Cyberstarts. The company was founded by Bar Kogan and Gal Karmi, both former Unit 8200 engineers, and operates from Tel Aviv and San Francisco. A day earlier, Kai launched from San Jose with $125 million in funding led by Evolution Equity Partners. Kai was founded by Galina Antova, co‑founder of Claroty, and Damiano Bolzoni. Both companies launched platforms focused on securing enterprise environments where AI agents operate.
Enterprise AI agents are quickly moving from experiments into real business systems. These agents search internal data, trigger workflows, modify records, and interact with sensitive systems. Traditional security tools were built to monitor human users and predictable software behavior. They were not designed to supervise autonomous systems that reason dynamically and act on their own.
Both Onyx and Kai were built to address that gap. Onyx focuses on controlling what AI agents are allowed to do before actions are executed. Kai focuses on using AI agents to perform security operations tasks such as threat analysis, vulnerability assessment, and automated remediation. Each company approaches the same problem from a different direction.
Two AI security startups raising more than $40 million and launching within the same 48‑hour window suggests growing investor and enterprise attention around the risks introduced by autonomous AI systems. This article examines why these companies emerged now, what each platform actually delivers, and what their arrival signals about the future of enterprise AI security.
Key Terms
AI Control Plane
A system that monitors and governs what AI agents are allowed to do inside enterprise environments. It can observe agent activity, enforce rules, and block actions that violate policies.
Agentic AI
AI systems that can plan, retrieve information, and execute tasks across software systems with limited human input.
Exposure Management
A security practice that identifies weaknesses across systems, applications, and infrastructure so organizations can reduce the risk of attacks.
Detection Engineering
The process of designing and maintaining detection rules that help security teams identify suspicious or malicious activity.
Remediation Automation
Technology that automatically fixes security problems such as vulnerabilities or misconfigurations once they are detected.
Prompt Injection
A type of attack where malicious input is designed to manipulate an AI system into ignoring its intended rules or producing harmful outputs.
Reasoning Transparency
The ability to see and understand the steps an AI system used to reach a decision or perform an action.
Who Onyx and Kai Are, What Drives Both Launches, and What Each Platform Delivers
Why These Two Companies Are Being Analyzed Together
Two purpose-built agentic AI security platforms launching from stealth within the same 48-hour window is a market signal. It points to a growing consensus that existing enterprise security tooling cannot properly govern AI agents operating inside production systems. Security teams can monitor access, logs, and activity, but they still lack purpose-built systems for supervising how AI agents reason, what actions they are about to take, and whether those actions stay within approved boundaries.
Onyx Security; Company Profile
Founded: 2024 | Headquarters: Tel Aviv, Israel / San Francisco, California
Funding: $40M seed round, led by Conviction and Cyberstarts, announced March 12, 2026
Co-founders: Bar Kogan (CEO) and Gal Karmi (CTO), both Unit 8200 veterans
Product category: AI agent security, real-time AI control plane
Core capabilities: AI agent discovery across enterprise environments, step-by-step reasoning transparency, real-time action approval, blocking, and correction, prompt injection detection, and least-privilege enforcement for agent permissions
Target buyer: Security and infrastructure teams governing AI agent deployments inside enterprise systems
Current status: Generally available and working with Fortune 500 customers at launch
What the Platform Actually Delivers
Onyx gives enterprises visibility into AI agent behavior that most current security tools do not provide. Security teams can discover where AI agents are operating, see the steps agents take while completing tasks, and understand what systems or data those agents are trying to access. According to CEO Bar Kogan, organizations are beginning to realize that "AI agents are acting inside business systems but security teams still cannot see or control what those agents are doing." The platform is designed to close that visibility gap.
The system also allows organizations to control agent actions before they happen. If an agent attempts to change a record, access a sensitive system, or trigger a workflow outside its allowed scope, teams can approve, block, or correct that action in real time. Once an autonomous system performs an action inside a production environment, the impact can spread quickly across connected systems, which makes pre‑execution controls important.
Onyx also focuses heavily on prompt injection risk, which has become one of the most common attack paths against AI agents. Malicious instructions can trick agents into revealing sensitive data or performing unintended actions. By identifying and filtering these inputs, the platform aims to prevent agents from following instructions that were never intended by their operators.
Every enterprise is becoming an agent operator — whether they planned to or not. The safe adoption of AI agents requires security from attacks, as well as ensuring agents don't make critical mistakes.
-- Bar Kogan CEO and Co-Founder of Onyx Security
The practical effect of deploying Onyx is a change in how enterprise security teams supervise automated systems. Instead of reviewing logs after an action has already occurred, teams gain a control layer that allows them to observe reasoning steps, review proposed actions, and intervene before execution. The platform is designed to fit into the workflows of security and infrastructure teams responsible for overseeing AI systems that interact with enterprise data and applications.
Kai; Company Profile
Founded: 2025 | Headquarters: San Jose, California
Funding: $125M, led by Evolution Equity Partners, announced March 11, 2026
Co-founders: Galina Antova (CEO), co-founder of Claroty and former Chief Business Development Officer, and Damiano Bolzoni (CTO)
Product category: Agentic AI security operations, unified detection and response
Core capabilities: Automated threat intelligence ingestion and correlation, agentic vulnerability analysis and false positive elimination, autonomous remediation deployment, unified exposure management and asset context, and detection engineering automation across fragmented SOC tooling
Target buyer: Security operations teams replacing fragmented SIEM, threat intelligence, and vulnerability management workflows with an integrated agentic platform
Current status: Platform in deployment with early enterprise customers at launch
What the Platform Actually Delivers
Kai is built to reduce the manual handoffs that slow down security operations teams. Many SOCs still move between separate tools for vulnerability scanning, threat intelligence, exposure management, and remediation. Products such as Qualys, Rapid7, and Tenable can provide pieces of that workflow, but analysts still spend time stitching together findings, confirming what matters, and deciding what should happen next. Kai’s platform is designed to collapse those steps into one operating layer.
CEO Galina Antova described the goal of the platform as shortening the time between identifying risk and fixing it. In interviews about the launch, she explained that tasks which once took weeks or months to move from detection to remediation can now be completed in hours when AI agents analyze security findings, remove false positives, correlate risk signals, and initiate remediation steps automatically.
That full chain of execution, getting to remediation, instead of something that was months and many team handoffs now becomes a matter of an hour, a couple of hours
-- Galina Antova CEO and Co-founder
Kai’s model still leaves an important role for human teams. Machines handle large‑scale data processing, pattern recognition, and initial triage. Security teams remain responsible for strategic prioritization, oversight, and defining how much authority automated systems should have inside production security workflows.
Both companies are built on the same core assumption: agentic AI security requires purpose-built infrastructure rather than small extensions of legacy tools. The difference is where each company places the agent. Onyx focuses on supervising AI agents operating inside business systems. Kai focuses on using AI agents to perform the work of security operations. An enterprise could reasonably adopt both approaches at the same time because they address different parts of the security workflow.
Conditions Driving the Rise of Agentic AI Security Platforms
Enterprise adoption of AI agents is expanding faster than the security systems designed to supervise them. Organizations are moving AI from internal experiments into production workflows that touch sensitive data, financial systems, customer records, and infrastructure controls. As that shift accelerates, security teams are discovering that many existing tools were designed for human users and predictable software behavior, not autonomous systems that plan and execute actions on their own.
Several structural pressures are pushing companies to build new categories of security platforms for AI agents:
Autonomous systems now operate inside critical business systems. AI agents increasingly interact with databases, SaaS applications, infrastructure tools, and internal APIs. Once these systems can execute actions rather than just generate text, the security impact becomes operational rather than theoretical.
Traditional monitoring tools lack visibility into agent reasoning. Security teams can see activity logs, API calls, or system access, but they often cannot see why an AI agent decided to perform a specific action. Without reasoning visibility, teams struggle to determine whether behavior is legitimate or manipulated.
Prompt injection and agent manipulation attacks are emerging quickly. Researchers and red‑team groups have demonstrated that malicious instructions can cause AI agents to expose sensitive information or perform unintended operations. These attacks target the decision process of the AI system rather than the infrastructure around it.
Security operations teams already face tool fragmentation. Many organizations run separate platforms for vulnerability management, threat intelligence, detection engineering, and remediation. AI‑driven security platforms promise to reduce that fragmentation by automating parts of the analysis and response workflow.
Investors and enterprises expect AI adoption to accelerate. Funding rounds for companies such as Onyx and Kai reflect a belief that autonomous AI systems will soon operate across enterprise software environments. If that assumption holds, security controls designed specifically for AI agents will become necessary infrastructure.
How Enterprises Currently Secure AI Agents
Most enterprises deploying AI systems today still rely on security controls that were originally designed for traditional software applications. Identity management, API monitoring, network controls, and logging systems help organizations track what systems are doing, but they were not built to supervise the reasoning processes of AI agents.
When an AI agent performs an action inside an enterprise environment, security teams usually see the result of that action rather than the decision process that produced it. For example, logs may show that an agent accessed a database or triggered a workflow. What those logs often do not show is the sequence of reasoning steps the agent used to reach that decision. That gap makes it difficult to determine whether the behavior was legitimate, manipulated, or simply mistaken.
Another challenge is operational speed; AI agents can perform tasks across multiple systems in seconds. Human security teams reviewing logs or alerts after the fact may not be able to intervene quickly enough to stop unintended actions from spreading through connected systems.
Because of these limitations, many organizations currently rely on restrictive deployment strategies. Security teams limit what systems AI agents can access, restrict their permissions, or keep them confined to non‑critical environments. Platforms such as Onyx and Kai are attempting to remove those constraints by giving enterprises a way to supervise or automate security decisions around AI activity in real time.
Our Take
AI Security Take
The nearly simultaneous launches of Onyx and Kai signal that agentic AI security is becoming its own category rather than a feature inside existing cybersecurity platforms. Enterprises are already running autonomous systems that search data, modify records, trigger workflows, and interact with infrastructure. Once software can make decisions and act inside business systems, the security challenge shifts from simply protecting infrastructure to supervising how those autonomous systems behave. Platforms designed specifically for AI agents are emerging because traditional security layers were never designed for that responsibility.
Even with these new platforms, important risks remain. AI agents can still be manipulated through prompt injection, data poisoning, or unexpected interactions between systems. Visibility into reasoning steps helps security teams understand agent behavior, but it does not remove the need for strong access controls, careful system design, and continuous monitoring. Enterprises deploying AI agents still need layered defenses that combine infrastructure security, application security, and governance controls.
Organizations evaluating platforms such as Onyx or Kai should focus on operational integration rather than feature lists. Security teams need to understand how these tools fit into existing workflows, whether they integrate with current monitoring and identity systems, and how much authority automated security agents should have in production environments. Governance teams should also examine auditability, logging, and policy enforcement against established frameworks such as the NIST AI Risk Management Framework (NIST AI RMF) and regulatory expectations emerging from the EU AI Act to ensure AI‑driven actions remain traceable, reviewable, and compliant.
The accountability gap these platforms address already exists. Enterprises are deploying AI agents that interact with production data and business systems today, while the security and governance layers responsible for supervising those systems are still catching up. Onyx and Kai represent early attempts to close that gap by giving security teams visibility into agent behavior and tools to intervene before autonomous actions create operational risk.
Security leaders evaluating emerging agentic AI security platforms can explore vendors, capabilities, and governance tooling inside the GAIG Marketplace, where enterprise teams can compare solutions designed to secure, monitor, and govern AI systems operating in production environments.