SentinelOne®’s AI EDR Stops the Axios Attack Autonomously

A compromised Axios update spread in just 89 seconds, not through a break in security, but through systems operating exactly as they were designed. This incident exposes a deeper issue in AI supply chains, where trusted workflows, automated agents, and inherited permissions allow attacks to move at machine speed. Here is what actually happened, how it was stopped in real time, and why most governance strategies still miss the moment where control matters most.

Updated on April 05, 2026

At 00:21 UTC, a compromised Axios package was published, and within 89 seconds the first confirmed infection had taken hold in a production environment. There was no phishing email, no user error, and no developer manually pulling a suspicious dependency under pressure, which is precisely what makes the incident worth reinterpreting. In at least one confirmed case, the installation was triggered by an AI coding agent executing a routine update inside a trusted workflow, meaning the initial entry point followed a path most organizations would classify as normal system operation rather than anomalous activity.

Framing this as a breach understates what actually occurred, because the system did not fail in the conventional sense of being bypassed or exploited. The sequence unfolded exactly as the environment allowed: execution moved immediately from package publication to installation without any intermediate checkpoint requiring validation, delay, or human oversight. When execution operates at machine speed and validation is absent or deferred, the difference between expected behavior and harmful outcome collapses into a single continuous process that is difficult to interrupt once it begins.

The system acted within its permissions, which is what makes this a governance failure rather than a detection failure or a tooling gap. This article reconstructs the attack chain in detail, explains why it propagated as quickly as it did, and outlines what it reveals about AI supply chain governance, particularly the gap between what organizations approve at the policy level and what their systems are actually allowed to execute in practice.

Anatomy of a Machine-Speed Supply Chain Attack

How did the attack happen in the first place? It didn’t appear out of nowhere. The actor first compromised upstream systems and credentials, positioning themselves deep inside the software supply chain long before they ever touched Axios. By the time the package became the target, the attacker was simply executing the final distribution step using infrastructure that developers already trusted without question.

The hijack of the maintainer’s npm token removed any need for a separate exploit against Axios itself. Once the malicious versions were published, every system configured to accept updates from that maintainer inherited the compromise automatically. The insertion of plain-crypto-js@4.2.1 as a dependency ensured the payload would execute during installation, so the moment the package was pulled the attack chain was already moving forward.
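The standard npm mechanism for executing code during installation is lifecycle scripts (`preinstall`, `install`, `postinstall`), which run arbitrary commands the moment a package is pulled; an inserted dependency almost certainly relied on a hook of this kind. The sketch below, using an illustrative manifest rather than the actual malicious package contents, shows how such install-time hooks can be detected in a package manifest:

```typescript
// Minimal sketch: flag npm manifests that declare install-time lifecycle
// hooks, which run arbitrary commands during `npm install`.
// The manifest below is illustrative, not the real package's contents.

interface Manifest {
  name: string;
  version: string;
  scripts?: Record<string, string>;
}

const INSTALL_HOOKS = ["preinstall", "install", "postinstall"];

function installTimeHooks(pkg: Manifest): string[] {
  const scripts = pkg.scripts ?? {};
  return INSTALL_HOOKS.filter((hook) => hook in scripts);
}

// Hypothetical manifest resembling the dependency-insertion pattern:
const suspect: Manifest = {
  name: "plain-crypto-js",
  version: "4.2.1",
  scripts: { postinstall: "node ./setup.js" },
};

console.log(installTimeHooks(suspect)); // ["postinstall"]
```

Scanning dependencies for install-time hooks before allowing an install is one cheap checkpoint an environment can add between "package pulled" and "code executed."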

The speed of propagation came from the structure itself. Axios sits inside a large percentage of modern JavaScript environments, which means distribution was built into the target. Within a three-hour window, hundreds of thousands of systems pulled the compromised versions because they were functioning exactly as designed inside a trusted update ecosystem.
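The "pulled automatically" behavior comes from semver ranges: a project declaring `^1.6.0` accepts any newer compatible release without review, so a freshly published malicious patch is installed the next time dependencies resolve. A simplified sketch of the caret-matching rule (version numbers are illustrative; real tooling uses the `semver` package, and caret semantics differ for 0.x versions):

```typescript
// Simplified sketch of caret-range resolution: "^1.6.0" accepts any later
// version with the same major, which is why a newly published malicious
// patch release is pulled automatically. Only the major >= 1 case is
// modeled; 0.x caret semantics are stricter in real semver.

function parse(v: string): [number, number, number] {
  const [maj, min, pat] = v.split(".").map(Number);
  return [maj, min, pat];
}

function satisfiesCaret(version: string, range: string): boolean {
  const [rMaj, rMin, rPat] = parse(range.slice(1)); // drop "^"
  const [vMaj, vMin, vPat] = parse(version);
  if (vMaj !== rMaj) return false;
  if (vMin !== rMin) return vMin > rMin;
  return vPat >= rPat;
}

// A project pinned to "^1.6.0" silently accepts a hypothetical 1.6.1:
console.log(satisfiesCaret("1.6.1", "^1.6.0")); // true
console.log(satisfiesCaret("2.0.0", "^1.6.0")); // false
```

Exact version pinning plus a lockfile narrows this window, at the cost of slower updates.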

The Agent Permission Problem

How does an attack like this reach production without anyone noticing? At least one confirmed infection path happened without any direct human involvement. An AI coding agent, operating inside a normal development workflow, executed an automatic dependency update and pulled the compromised package as part of routine system behavior. The action required no escalation, no approval, and no additional awareness because it was already permitted within the environment.

This exposes a structural issue most organizations have not explicitly addressed. AI agents are often deployed with the same permissions as developers, including access to package managers, registries, and execution environments. They operate without the contextual judgment that typically governs how those permissions are used. The system assumes that if an action is allowed, it is also appropriate.

The agent did not bypass security controls. It followed the exact path it was authorized to follow. When authority is granted without constraints on how and when it should be exercised, execution becomes automatic and continuous. In that context, the attack only needs to move through the permissions that are already in place.
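One way to narrow the gap between "allowed" and "appropriate" is to attach constraints to agent actions rather than granting blanket developer permissions. The sketch below is entirely hypothetical, with illustrative names and an illustrative policy rule (holding very fresh releases for human approval), not any vendor's API:

```typescript
// Hypothetical sketch of a constraint-aware permission gate for agent
// actions: an action can be permitted in general yet still require
// approval under specific conditions. Policy rules here are illustrative.

type AgentAction =
  | { kind: "dependency_update"; pkg: string; publishedMinutesAgo: number }
  | { kind: "run_tests" };

type Decision = "allow" | "require_human_approval";

const MIN_RELEASE_AGE_MINUTES = 24 * 60; // illustrative threshold

function gate(action: AgentAction): Decision {
  switch (action.kind) {
    case "run_tests":
      return "allow";
    case "dependency_update":
      // Constraint: agents cannot auto-install very fresh releases.
      return action.publishedMinutesAgo < MIN_RELEASE_AGE_MINUTES
        ? "require_human_approval"
        : "allow";
  }
}

// A package published two minutes ago gets held for review:
console.log(gate({ kind: "dependency_update", pkg: "axios", publishedMinutesAgo: 2 }));
// "require_human_approval"
```

A rule like this would have inserted exactly the checkpoint the incident lacked: the 89-second-old release could not have been installed autonomously.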

Why It Spread

The attack moved as quickly as it did because it followed paths that systems are explicitly designed to trust. Installation behavior such as npm install runs continuously across development environments, CI pipelines, and automated workflows, which means the action itself does not signal risk. Nothing about pulling a dependency from a known maintainer appears unusual, especially when the source has already been validated historically. That creates a condition where trust is not evaluated at the moment of execution but inherited from prior assumptions.

This raises a practical question about where trust actually gets checked. In most environments, it is verified once at the identity level and then extended indefinitely across every subsequent action. When the maintainer account was compromised, that trust propagated automatically from identity to package, from package to dependency, and from dependency to every system consuming it. No part of the process required revalidation after the compromise occurred.
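One concrete form of revalidation at execution time is integrity pinning, which npm lockfiles already support: the lockfile records a hash of the exact artifact (an `integrity` field such as `sha512-...`), and the install verifies the fetched bytes against it instead of trusting the maintainer's identity. A minimal sketch of that check, using Node's standard crypto module:

```typescript
import { createHash } from "node:crypto";

// Sketch of lockfile-style integrity checking: trust is revalidated at
// the moment of execution by hashing the fetched artifact and comparing
// it to a pinned value, rather than inherited from maintainer identity.

function integrityOf(tarball: Buffer): string {
  return "sha512-" + createHash("sha512").update(tarball).digest("base64");
}

function verify(tarball: Buffer, pinned: string): boolean {
  return integrityOf(tarball) === pinned;
}

const original = Buffer.from("original package contents");
const pinned = integrityOf(original); // value recorded in the lockfile

console.log(verify(original, pinned)); // true
console.log(verify(Buffer.from("tampered contents"), pinned)); // false
```

Integrity pinning only protects systems that resolved the package before the compromise; the deeper point stands that identity-level trust, once granted, propagated without any such per-artifact check.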

The payload design reinforced this structure by minimizing visibility after execution, removing artifacts and restoring clean metadata so that traditional indicators of compromise were limited. The result is a system where expected behavior becomes the delivery mechanism, and where speed combined with inherited trust creates conditions that are difficult to interrupt once execution begins.

AI Supply Chain Governance Is Missing the Execution Layer

Where does governance actually apply once systems begin executing actions at machine speed? Most organizations point to policy frameworks, model approvals, and evaluation processes, since those define what is allowed inside the environment. Those mechanisms operate before execution begins, shaping decisions in advance, while the systems themselves continue forward once permissions are granted.

The Axios incident shows what happens when execution carries those permissions forward without interruption. No model produced an unsafe output, and no alert was triggered by unusual behavior. The system continued along an approved path, moving from package retrieval to installation without any requirement to pause, reassess, or validate conditions at runtime. Execution remained aligned with what had already been authorized, even as the underlying conditions changed.

That creates a structural exposure centered on how actions are carried out after approval. AI supply chain governance, when fully scoped, extends into the execution environment, defining how dependencies are introduced, when updates proceed, and how trust is evaluated continuously rather than assumed. Control at this level shapes behavior while systems are actively operating, rather than relying on decisions made earlier in the process.

Organizations that concentrate governance at the approval stage retain clarity about what is permitted, but once actions begin, execution proceeds with that same authority unchecked. The result is an uninterrupted pathway from permission to execution, even when new risk enters the system.

How the Attack Was Stopped in Real Time

The moment the compromised Axios package entered the environment, everything started moving automatically. Think of it like pressing a single button that triggers a chain reaction. The install command kicked things off, and right after that, the package ran a script in the background. That script began creating new processes, which are basically new tasks the system starts running on its own. Under normal conditions, installing a package should stay within a small, predictable set of actions, like unpacking files and setting things up. Here, the system started doing extra things that didn’t match that pattern.

This detection and response came from SentinelOne’s Singularity platform, specifically its endpoint detection and response capability, which means it watches what programs actually do on a machine while they are running. It does not rely on names or labels alone. It focuses on behavior, which is why it was able to follow this sequence even though the package looked legitimate at the start. Features like Purple AI support investigation and analysis for security teams, though the stopping action here came from the real-time behavioral engine that operates during execution.

SentinelOne followed that chain as it was happening. Instead of looking at each action separately, it treated the whole sequence like one story. The install led to a script, the script led to new processes, and those processes started doing more than they should. Imagine watching someone walk through a building where they are allowed to enter the front door, but then they suddenly start opening restricted rooms. Each step might look normal on its own, but together it tells you something is off.

The next step made it clearer. The system tried to reach out to an external server, which is like a computer calling home to receive instructions. This is how attackers usually take control after getting inside. SentinelOne saw that this call was coming from the same chain that started during installation, so it understood that all of these actions were connected.
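The "one story" logic described above can be sketched as event correlation over a process tree: each event is linked to its parent process, and a network callout that traces back to a package install is treated as part of the install's chain. This is a toy model for illustration, not SentinelOne's actual detection engine:

```typescript
// Illustrative sketch of behavioral chain correlation: events are linked
// by parent process, and an outbound connection whose ancestry traces
// back to a package install is blocked as part of that chain.
// Toy model only; not SentinelOne's actual engine.

interface Event {
  pid: number;
  parent: number | null;
  kind: "install" | "spawn" | "network";
}

function chainRootedInInstall(events: Event[], pid: number): boolean {
  // Walk up the process tree until we hit an install event or the root.
  const byPid = new Map(events.map((e) => [e.pid, e] as [number, Event]));
  let cur = byPid.get(pid);
  while (cur) {
    if (cur.kind === "install") return true;
    cur = cur.parent === null ? undefined : byPid.get(cur.parent);
  }
  return false;
}

function shouldBlock(events: Event[]): boolean {
  // Block when a network callout traces back to the install chain.
  return events.some(
    (e) => e.kind === "network" && chainRootedInInstall(events, e.pid)
  );
}

const observed: Event[] = [
  { pid: 100, parent: null, kind: "install" }, // npm install
  { pid: 101, parent: 100, kind: "spawn" },    // install script runs
  { pid: 102, parent: 101, kind: "network" },  // call home for instructions
];

console.log(shouldBlock(observed)); // true
```

The key property is that the decision is made from the relationship between events, not from any single event looking suspicious on its own.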

At that point, the system stepped in. It stopped the processes that were running the malicious code and blocked the outgoing connection before it could finish. That cut the chain in the middle while everything was still happening. No waiting, no review later, just stopping it right there.

This all happened in the same short window where the attack was active. When actions move this fast, the only way to stop them is to act while they are still happening, not after everything is already done.

Our Take

If you were responsible for this system, where would you have stepped in? Not after the install finished, because by then the code is already running. Not at approval, because everything that happened was already allowed. The pressure sits in the moment where the system is actually doing something, and that moment is where most teams currently have the least control.

The Axios incident shows what happens when execution is fast and governance is slow. Updates move in seconds, agents act instantly, and environments trust what they already approved. That combination creates a path where outcomes are decided before anyone has time to react. The system follows its permissions all the way through, and by the time something looks wrong, the sequence has already finished.

This is where most organizations are exposed today. They have visibility, they have policies, and they have review processes, but those controls live outside the moment where actions are happening. The result is a gap between what a team thinks is controlled and what the system is actually allowed to do when it starts moving.

If your environment can install, execute, and reach external systems in under a minute, then control has to exist inside that same window. Anything that operates slower than the system it is trying to govern will always arrive after the outcome is already decided.

If you are reading this and thinking an attack like this would be difficult to stop in your own environment, that is exactly where most teams are today. The next step is not adding more tools at random; it is understanding which vendors can actually control execution in real time for your specific setup. That depends on your stack, your workflows, and how your systems are currently allowed to behave.

If you want to close that gap, send in an inquiry through GetAIGovernance. We match you with the vendors that actually fit your environment, whether you are running AI agents, CI pipelines, or production systems that cannot afford this type of failure. Instead of guessing which platform works, you get a short list built around your exact use case so you can move faster without introducing new risk.
