CoSAI, an OASIS Open Project backed by contributors from IBM, Intel, Google, Meta, Cisco, Palo Alto Networks, Anthropic, Amazon, Dell, Red Hat, ServiceNow, and PayPal, published its Agentic Identity and Access Management framework on March 20, 2026, as part of Workstream 4: Secure Design Patterns for Agentic Systems. The document was approved by the CoSAI Technical Steering Committee and addresses one of the most consequential gaps in enterprise AI security today: the absence of an identity and access control model purpose-built for autonomous agents.
The core problem the framework identifies is structural. Traditional IAM was designed around long-lived human accounts, coarse role assignments, and a trust model that authenticates once and trusts for the session. AI agents break every one of those assumptions. They are autonomous, short-lived, self-updating, and often operate as multi-tenant entities acting on behalf of many users simultaneously. Static service accounts and shared credentials cannot track what an agent did, with whose authority, or whether the model it ran matched the one that was approved. When something goes wrong, there is no reliable way to reconstruct the chain of events.
The framework does not propose rebuilding IAM from scratch. Its central argument is that existing infrastructure — identity providers, OAuth and OIDC servers, PKI, policy engines, SIEM platforms — should remain the primary control plane, extended with agent-specific semantics. Agents should be treated as first-class identities with their own lifecycle, governance, and accountability. Credentials should be short-lived and scoped to the task at hand. Every hop in a multi-agent delegation chain should be authenticated, logged, and auditable. And organizations must be able to answer, from immutable records, exactly which agents were active, what permissions they held, and what they did.
The document was produced by a working group that includes representatives from organizations that collectively deploy AI at a scale large enough to encounter these problems in production. It references NIST SP 800-63, NIST AI RMF, RFC 8693, RFC 9396, the EU AI Act, and the CoSAI MCP Security white paper, giving it a regulatory grounding that most vendor-produced guidance lacks.
Key Findings
• Traditional IAM presumes long-lived human accounts, whereas AI agents are autonomous, short-lived, self-updating, and operate as multi-tenant entities acting for many users simultaneously — making static service accounts an inadequate control model.
• Four enterprise failure scenarios are documented: an over-privileged financial agent manipulated via prompt injection into initiating fraudulent payments; a support agent that chains CRM, knowledge base, and email access to exfiltrate sensitive records; a DevOps agent using reused human credentials to deploy unapproved production changes; and a data analytics agent leaking prompts across tenant boundaries.
• Seven threat themes drive the majority of agentic IAM failures: over-permissioning, loss of actor clarity through shared accounts, shadow agents operating outside registration, broken delegation chains across token exchanges, unsigned or swapped model binaries, indirect prompt injection, and agent collusion through proxy chaining.
• High-capability, high-risk deployments — financial operations, admin and DevOps functions, PII processing — require the full Agentic IAM stack: ephemeral identities, OBO delegation, token exchange, ABAC and PBAC policies, continuous evaluation, and human-in-the-loop controls.
• Authentication for autonomous agents is not a one-time event. Short-lived credentials and periodic attestations must continuously emit fresh claims about the agent’s state, including platform, environment, model version, and active task.
• Scope must narrow at each hop in a multi-agent delegation chain and must never expand beyond the delegating principal’s effective permissions. Revocation of a delegation must cascade to all downstream delegations.
• All agents, regardless of risk tier, should be registered and monitored. The capability-impact matrix adjusts the strength and granularity of controls, not whether controls apply.
• Organizations must be able to reconstruct from immutable logs which agents existed, what they were permitted to do, what delegations they held, and what actions they performed — a requirement the framework calls “prove control on demand.”
• The framework defines three adoption phases: Phase 1 establishes visibility by discovering and registering all agents and eliminating shared accounts; Phase 2 introduces short-lived tokens and context-aware authorization for higher-risk agents; Phase 3 implements full cross-domain delegation chains, continuous evaluation, and human-in-the-loop controls for critical actions.
• Agents at autonomy level L3 and above — domain-bounded multi-step planners through fully autonomous open-ended systems — should be treated as high capability by default and governed accordingly.
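The scope-narrowing and cascading-revocation rules in the findings above can be sketched in a few lines. This is an illustrative model, not code from the framework; the class and method names are invented for the sketch.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Delegation:
    """One hop in a delegation chain: a principal grants an agent some scopes."""
    principal: str
    agent: str
    scopes: frozenset
    parent: Optional["Delegation"] = None
    children: list = field(default_factory=list)
    revoked: bool = False

    def delegate(self, to_agent: str, requested: set) -> "Delegation":
        # Scope may only narrow: the next hop receives the intersection of
        # what it requested and what this hop actually holds, so scope can
        # never expand beyond the delegating principal's effective permissions.
        child = Delegation(principal=self.agent, agent=to_agent,
                           scopes=self.scopes & frozenset(requested), parent=self)
        self.children.append(child)
        return child

    def revoke(self) -> None:
        # Revocation cascades to every downstream delegation.
        self.revoked = True
        for child in self.children:
            child.revoke()

    def is_valid(self) -> bool:
        # A hop is only usable while no hop upstream of it has been revoked.
        node: Optional["Delegation"] = self
        while node is not None:
            if node.revoked:
                return False
            node = node.parent
        return True
```

For example, a downstream agent that requests `{"payments:write", "erp:admin"}` from a hop holding only `{"invoices:read", "payments:write"}` receives just `payments:write`, and revoking the root invalidates every hop beneath it.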
What the Report Covers
Why Classic IAM Fails for Agents
The framework opens by identifying the structural mismatch between traditional IAM assumptions and agentic behavior. Human IAM systems authenticate once and trust for the session. Agents change their behavior dynamically, operate across multi-hop delegation chains, act for multiple users simultaneously, and may update their own code or model between tasks. A shared service account cannot capture any of that context, which means that when an agent causes an incident, the organization cannot reconstruct what happened, who authorized it, or which version of the model was running.
Agents as First-Class Identities
The core architectural principle is that each AI agent must receive a unique, persistent identity tied to its specific code hash, model version, toolset, and configuration at runtime. Static attributes define what the agent is. Dynamic context captures its operating environment, current task, and in-memory state. Delegation information records the principal on whose behalf it is acting and the scope of that authority. Enterprise policies can require that agents present signed model manifests at runtime; if the loaded model does not match the manifest, the agent is blocked from executing high-impact actions and a revocation event is triggered.
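The manifest check described above can be sketched as follows. The manifest fields and the use of HMAC are assumptions made to keep the example self-contained; a real deployment would use asymmetric signatures over a PKI rather than a shared key.

```python
import hashlib
import hmac
import json

def verify_model_manifest(model_bytes: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Check that a loaded model matches its signed manifest.

    `manifest` is assumed (for this sketch) to look like:
      {"model_sha256": "<hex digest>", "model_version": "v1", "sig": "<hex HMAC>"}
    """
    # 1. Verify the manifest itself was signed by the trusted key.
    payload = json.dumps({k: manifest[k] for k in ("model_sha256", "model_version")},
                         sort_keys=True).encode()
    expected_sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, manifest["sig"]):
        return False  # manifest tampered with, or signed by the wrong key

    # 2. Verify the loaded model bytes match the approved hash. A mismatch
    # here is what should block high-impact actions and trigger revocation.
    loaded_hash = hashlib.sha256(model_bytes).hexdigest()
    return hmac.compare_digest(loaded_hash, manifest["model_sha256"])
```

The two-step shape is the point: a swapped model binary fails step 2 even when the manifest is authentic, and a forged manifest fails step 1 regardless of the model.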
Authentication, Authorization, and Delegation
The framework maps authentication mechanisms to autonomy levels and risk tiers. Low-risk, low-capability agents may use narrowly scoped service accounts with aggressive rotation. Higher-risk agents should use dynamic ephemeral identities, SPIFFE SVIDs, or short-lived OAuth tokens. The highest-risk agents should use hardware-backed keys in trusted execution environments. Authorization is evaluated against four elements at every interaction: the principal, the action, the resource, and conditions including time, risk score, and data sensitivity. Every agent has two distinct permission sets: its own baseline rights and its delegated on-behalf-of rights, which are carried in OBO tokens that preserve both identities so delegation chains remain traceable.
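The four-element evaluation and the dual permission sets can be sketched as a single check. The scope format (`"action:resource"`), the risk threshold, and the business-hours window are illustrative assumptions, not values from the framework.

```python
from typing import Optional

def authorize(principal: str, action: str, resource: str, conditions: dict,
              baseline: set, obo: Optional[set] = None) -> bool:
    """Evaluate the four elements at every interaction: principal, action,
    resource, and conditions (time, risk score, data sensitivity).

    `principal` would key into per-principal policy in a real engine; the
    condition thresholds below are illustrative.
    """
    # When acting on behalf of a user, effective rights are the intersection
    # of the agent's own baseline and the delegated OBO scopes, so an OBO
    # action can never exceed either party's permissions.
    effective = baseline if obo is None else (baseline & obo)
    if f"{action}:{resource}" not in effective:
        return False

    if conditions.get("risk_score", 0.0) > 0.7:
        return False  # illustrative risk-score ceiling
    if conditions.get("data_sensitivity") == "restricted" and not conditions.get("human_approved"):
        return False  # restricted data requires a human in the loop
    hour = conditions.get("hour", 12)
    return 6 <= hour <= 20  # illustrative business-hours window
```

Note that the same request can pass under the agent's baseline rights and fail under OBO, which is exactly the traceability property OBO tokens exist to preserve.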
The Invoice-Processing Agent Example
The framework includes a detailed end-to-end walkthrough of a semi-autonomous invoice processing agent. The agent receives short-lived, narrowly scoped credentials for each backend system — document store, ERP, payment API — rather than broad standing access. When a user in accounts payable initiates a session, the agent receives an OBO token carrying the user’s identity and scopes. The payment API enforces amount thresholds, approved supplier lists, and human-in-the-loop requirements for higher-value payments. Immutable logs capture every token issuance, API call, and agent-to-agent delegation with a correlation ID linking events across the full workflow. When anomalies appear — unusual payment spikes, repeated attempts to exceed thresholds — security operations can revoke credentials, narrow scopes, or disable automatic payments entirely.
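The payment API's decision logic in the walkthrough can be sketched as below. The supplier list, the approval threshold, and the decision labels are illustrative; the framework specifies the controls, not these values.

```python
import json
import uuid

APPROVED_SUPPLIERS = {"acme-supplies", "globex"}   # illustrative allow-list
AUTO_APPROVE_LIMIT = 10_000                        # illustrative threshold

def decide_payment(amount: float, supplier: str, correlation_id: str = None) -> str:
    """Mirror the walkthrough's payment-API checks: supplier allow-list,
    amount threshold, and human-in-the-loop above the limit. Every decision
    is emitted as a structured log line keyed by a correlation ID so events
    across the workflow can be linked."""
    correlation_id = correlation_id or str(uuid.uuid4())
    if supplier not in APPROVED_SUPPLIERS:
        decision = "reject"
    elif amount > AUTO_APPROVE_LIMIT:
        decision = "require_human_approval"
    else:
        decision = "auto_approve"
    # In production this line would go to an append-only audit store.
    print(json.dumps({"correlation_id": correlation_id, "supplier": supplier,
                      "amount": amount, "decision": decision}))
    return decision
```

Because the correlation ID travels with every token issuance and API call, security operations can filter the audit stream for one workflow end to end before deciding whether to revoke credentials or narrow scopes.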
Phased Transition to Agentic IAM
The framework provides a practical three-phase adoption roadmap. Phase 1 focuses on visibility: discovering and registering all agents, eliminating shared accounts, and establishing immutable action logging. Phase 2 introduces contextual access controls, replacing standing privilege for higher-risk agents with short-lived tokens and intent-aware authorization. Phase 3 implements the full Agentic IAM model, including cross-domain delegation chains, continuous evaluation, and automated discovery of new and changed agents. Each phase is cumulative, and organizations that delay Phase 1 increase their security and compliance exposure with every new agent introduced into the environment.
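A Phase 1 registry record might look like the sketch below. The field names are invented for illustration; the substance is the framework's Phase 1 goal that every agent is known, attributable, and off shared credentials.

```python
# Illustrative Phase 1 registry record; field names are assumptions, not
# taken from the framework.
agent_record = {
    "agent_id": "invoice-agent-01",
    "owner_team": "finance-automation",
    "model_version": "v2.3.1",
    "granted_scopes": ["invoices:read", "erp:write"],
    "risk_tier": "high",          # drives Phase 2/3 control strength
    "shared_account": False,      # Phase 1 goal: no shared credentials
}

def registry_gaps(records: list) -> list:
    """Flag agents that still violate Phase 1 requirements: a shared
    account, or no accountable owner."""
    return [r["agent_id"] for r in records
            if r.get("shared_account") or not r.get("owner_team")]
```

Running a check like this across the registry is one way to measure Phase 1 completion before layering Phase 2's short-lived tokens on top.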
Our Take
AI Security Take
The CoSAI Agentic IAM framework lands at a moment when most enterprise AI security programs are still focused on the interaction layer — prompt filtering, output guardrails, browser-level controls. Those are real problems worth solving, but they sit at the surface. The identity and delegation layer sits underneath everything, and it is where the structural exposure actually lives.
An agent that passes a prompt filter but operates under a shared service account with broad standing access is still a serious liability. When it behaves unexpectedly — because it was manipulated, because its model was updated, because a user gave it a vague instruction — there is no reliable way to isolate what happened, revoke exactly the right credentials, or prove to a regulator that the organization had meaningful control. The framework published by CoSAI is the most detailed attempt yet to close that gap, and it comes from contributors who are actually building and deploying these systems at scale.
The nine principles in the document — treat agents as first-class identities, eliminate standing privilege, enforce at every hop, prove control on demand — are not aspirational. They are the minimum viable posture for any organization deploying agents with access to financial systems, personnel data, production infrastructure, or sensitive customer records. Organizations that treat agent security as a feature to be added later are accumulating technical debt that will become an incident waiting for a trigger.
GetAIGovernance tracks vendors building the identity, access, runtime enforcement, and audit infrastructure that Agentic IAM requires. Browse the AI Security category and AI Access Control at GetAIGovernance.net to evaluate platforms addressing the control gaps this framework defines.