AI Governance Reports

Work AI Institute Publishes "The New Rules of AI Security" Introducing the AWARE Framework

The Work AI Institute has released The New Rules of AI Security, introducing the AWARE framework, a governance model designed to supervise AI agents operating inside enterprise systems. Co-authored by leaders from Palo Alto Networks, Databricks, and Glean, the report outlines how organizations must adapt security models as autonomous AI systems begin executing actions across production environments.

Updated on March 12, 2026
Work AI Institute Publishes "The New Rules of AI Security" Introducing the AWARE Framework

The Work AI Institute recently released a new report titled The New Rules of AI Security, co‑authored with security and AI leaders from Palo Alto Networks, Databricks, and Glean. At the center of the report is the AWARE framework, a model designed to help enterprises govern AI agents operating inside production systems. The report argues that as organizations deploy AI agents across internal tools, data systems, and infrastructure, security teams must shift from protecting software systems to supervising autonomous behavior.

The report arrives at a moment when AI agents are already operating in real enterprise environments. Unit 42 simulations referenced in the report show that some intrusions can now reach data exfiltration in under an hour. Traditional security tools were designed for human users and predictable software behavior, leaving a governance gap when autonomous systems begin executing tasks across multiple enterprise systems.

When competing vendors collaborate on a shared governance framework, it usually signals something important about the maturity of a market. Companies such as Palo Alto Networks, Databricks, and Glean all sell different parts of the enterprise AI stack, yet they agreed on a common behavioral governance model. When competitors align around a shared standard, it often means the category is about to consolidate or face regulatory pressure.

The market context makes the timing significant. According to the report, 95% of enterprise leaders say they are investing in AI, yet only 34% report having a governance framework implemented at scale. The AWARE framework represents one of the most operationally detailed attempts so far to close that gap, which is why GAIG is covering the report in detail. The framework maps directly to several governance and security tooling categories currently tracked inside the GAIG marketplace.

Key Terms

AI Agent

An AI system that can autonomously perform tasks such as retrieving data, triggering workflows, or interacting with software systems with minimal human supervision.

Agentic AI

AI systems capable of planning, reasoning, and executing multi‑step actions across software environments.

AWARE Framework

A governance model introduced in the report that outlines behavioral controls for supervising AI agents and managing their actions inside enterprise systems.

Prompt Injection

A type of attack that manipulates AI systems by inserting malicious instructions designed to override their intended rules.

RAG (Retrieval‑Augmented Generation)

An AI architecture where models retrieve external data sources in real time to improve responses or decision‑making.

Shadow AI

AI tools or agents deployed by employees or teams without formal oversight from enterprise IT or governance teams.

Non‑Human Identity

Machine identities used by AI agents, services, or automated systems when interacting with enterprise software.

Behavioral Governance

A governance approach focused on supervising how AI systems behave and what actions they perform rather than only controlling system access.

Key Findings

  • The average breach detection time remains around 290 days for AI‑related breaches compared with 207 days for traditional breaches, reflecting the difficulty of detecting attacks involving autonomous systems.

  • 95% of enterprise leaders report investing in AI, while only 34% say they have implemented governance frameworks at scale.

  • Unit 42 simulations show some AI‑driven attack chains reaching data exfiltration in as little as 25 minutes.

  • Only 17% of organizations have implemented automated technical controls governing how AI systems interact with enterprise data.

  • Enterprises must govern four categories of AI agents: first‑party, second‑party, third‑party, and shadow agents.

  • The report introduces the AWARE framework, built around five governance pillars: Authenticate, Watch, Audit, Respond, and Enforce.

  • A proposed five‑layer AI security architecture outlines how organizations should structure monitoring, control, and governance layers around AI systems.

  • Organizations with strong governance controls deploy 12× more AI projects into production environments.

  • 31% of S&P 500 boards disclosed formal AI oversight in 2024, reflecting growing executive involvement in AI governance.

What the Report Covers

Part 1 — Why AI Breaks Security Assumptions

The report begins by examining how AI systems expand the enterprise attack surface. AI agents create a rapid increase in non‑human identities interacting with enterprise infrastructure. Traditional identity and access management models struggle with this shift, particularly across authentication, authorization, and auditability. The section also defines the four major agent categories organizations must govern: first‑party agents developed internally, second‑party agents built by partners, third‑party agents embedded in vendor software, and shadow agents deployed without formal approval.

Part 2 — The AWARE Framework

The second section introduces the AWARE framework and explains how each pillar focuses on governing AI behavior rather than simply controlling access. The report argues that identity‑based controls alone cannot supervise autonomous systems that plan and execute tasks independently. Behavioral governance layers must instead evaluate how agents reason and act.

Part 3 — Governing Real‑World Threats

The report identifies several emerging AI‑specific threat patterns enterprises must prepare for. These include prompt injection, RAG leakage, agent chaining, silent drift, and embedding leakage. Each of these threats targets how AI systems reason, retrieve information, or coordinate actions across enterprise systems.

Part 4 — Operationalizing AWARE Through Architecture

The report then introduces a five‑layer security stack designed to operationalize AI governance. This architecture includes monitoring layers, policy enforcement layers, governance planes, and response mechanisms. It also outlines a phased implementation roadmap that organizations can follow when building AI governance capabilities.

Part 5 — The New Role of the AI Security Leader

The final section focuses on the evolving responsibilities of security leadership. CISOs are expected to oversee AI governance programs, coordinate with engineering teams deploying AI systems, and communicate AI risk management strategies to executive leadership and boards.

Readers interested in reviewing the full framework and implementation guidance can download the complete report from the Work AI Institute.

Our Take

AI Security Take

This report signals a structural shift in the AI security market. Vendors that normally compete across infrastructure, security tooling, and AI platforms collaborated on a shared governance model. When competitors align on standards like this, it usually indicates the category is about to face regulatory scrutiny or rapid consolidation.

The AWARE framework moves in the right direction by focusing on behavioral governance rather than static access control. This direction aligns closely with emerging guidance such as the NIST AI Risk Management Framework and regulatory efforts like the EU AI Act, both of which emphasize oversight of AI system behavior rather than simply controlling who can access a system.

However, several unresolved challenges remain. Enterprises still face difficult questions around accountability for third‑party AI systems, interoperability between governance tools from different vendors, and the scale of shadow AI deployments across large organizations.

The reality is that enterprises are already deploying AI agents in production environments today. The governance infrastructure responsible for supervising those systems is still forming. As adoption accelerates, platforms focused on AI governance, monitoring, compliance, and security will become essential layers of the enterprise AI stack. Explore the emerging vendors building those systems inside the GAIG Marketplace.

Related Articles

Stay ahead of Industry Trends with our Newsletter

Get expert insights, regulatory updates, and best practices delivered to your inbox