Amy Mushahwar and Tricia Wagner, privacy and cybersecurity attorneys at Lowenstein Sandler with more than two decades of incident response experience each, published a piece on JD Supra that lands harder than most vendor white papers on this topic. Their argument is direct: AI governance that exists only in documents cannot scale, and most organizations have the documents while almost none can prove they have the infrastructure. The line that earns the piece its authority is the statement that “Policies promise governance. Pipelines prove it.” That sentence is the clearest distillation of the compliance theater problem to appear in any legal or enterprise publication this year.
Mushahwar and Wagner walked the Legalweek vendor floor specifically looking for the infrastructure that would make AI governance real: model inventory systems, runtime monitoring, data lineage, identity governance for AI agents, and AI attack surface visibility. What they found instead were contract review platforms, AI-powered legal research, e-discovery analytics, and workflow automation. Then they named what was absent. That method, walking the floor and documenting the gap between governance vocabulary and governance infrastructure, is the kind of primary fieldwork most editorial coverage of this topic skips. They note that the same split appears at HIMSS in healthcare, at financial services innovation conferences, and at manufacturing events. It is not a legal profession problem. It is a structural reality of how technology adoption works.
This tells us that two experienced practitioners who understand what evidence looks like in a regulatory or litigation context have independently arrived at the conclusion the compliance theater framework describes: governance is invisible until it fails, at which point it becomes very visible. The eight-domain infrastructure framework they outline is not a finished product recommendation. It is the most honest map of what enterprise AI governance actually requires to appear in legal publishing this cycle.
The Core Observation
At one booth with “Responsible AI” branding, one of the authors asked how the platform enforced the data-handling policies it was marketing. The representative pulled up a read-only PDF of an AI ethics charter and a static log of user logins. When asked where the enforcement controls lived, there was a long pause. That single interaction describes the compliance theater dynamic more precisely than any framework diagram. A digital filing cabinet presented as a governance system. The gap between the branding and the capability is the problem every enterprise buyer faces during procurement — everything looks organized until you ask the enforcement question.
The pattern across industries matters because it removes the easy explanation that this is a legal profession problem or a maturity gap in a single sector. Productivity tools are visible and measurable. Governance infrastructure is invisible until it fails. That asymmetry is structural, and it shapes what gets bought, what gets demonstrated, and what gets left out of the procurement conversation across every industry where AI is being deployed at scale.
The Eight-Domain Framework
The eight domains Mushahwar and Wagner outline represent the full infrastructure stack that produces verifiable proof that governance is working:

Governance and risk orchestration
AI discovery and security posture management
Agent orchestration and workflow control
Data security posture management for AI
Data lineage and pipeline visibility
Identity and access governance
Runtime protection and behavioral monitoring
AI supply chain and model integrity

The analytical point is that these are not governance platforms in the traditional documentation sense; they are the technical layer that generates the evidence regulatory and legal proceedings will eventually require. The five artifacts they identify as the evidentiary output of the framework are automated model inventory, quantitative validation logs, data provenance records, drift and performance telemetry, and machine identity audit trails. Those five artifacts are the answer to the question most organizations cannot answer today: if a regulator asked tomorrow how a specific AI system produced a decision, what would you show them?
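As a purely illustrative sketch of what the first artifact could look like in practice (every field name and value here is an assumption for illustration, not a schema the authors specify), an automated model inventory entry would need enough structure to answer the regulator's question on demand:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelInventoryEntry:
    """One record in an automated model inventory (hypothetical schema)."""
    model_id: str            # stable identifier across environments
    vendor: str              # supplier, or "internal" for in-house models
    environment: str         # e.g. "prod", "staging"
    version: str             # exact deployed version
    data_sources: list[str]  # provenance pointers for training/input data
    last_validated: datetime # timestamp of the most recent validation run
    owner: str               # accountable human or team identity

    def evidence_summary(self) -> str:
        """One line of what you could show a regulator for this system."""
        return (f"{self.model_id} v{self.version} in {self.environment}, "
                f"vendor={self.vendor}, "
                f"last validated {self.last_validated.date().isoformat()}, "
                f"owner={self.owner}")

# Example record (all values hypothetical)
entry = ModelInventoryEntry(
    model_id="claims-triage",
    vendor="internal",
    environment="prod",
    version="2.4.1",
    data_sources=["s3://claims-archive/2023"],
    last_validated=datetime(2025, 3, 1, tzinfo=timezone.utc),
    owner="risk-engineering",
)
print(entry.evidence_summary())
```

The point of the sketch is the difference the authors are drawing: a policy PDF asserts that such records should exist, while a pipeline emits them automatically for every deployment.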
The Five Questions
Mushahwar and Wagner say enterprise teams should bring five specific questions to their CISO this week:
Can the organization show a current AI model inventory across every deployment environment and vendor integration?
What does runtime monitoring cover, and what does it not cover?
What documentation exists to trace how a specific AI output was produced?
Which systems can AI agents access, and who last reviewed those authorizations?
How would the organization know if an AI vendor pushed a model update tonight that changed system behavior?
The authors note that any of those questions that produces a long silence is a starting point — and that if the silence is followed by “we have a policy for that,” the follow-up question is to ask to see the pipeline. That pivot from policy to pipeline is the enforcement question the Legalweek booth could not answer.
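The fourth question, which systems agents can access and when those authorizations were last reviewed, is the kind of check a pipeline can enforce rather than a policy merely promising it. A minimal sketch (the record format and the 90-day review window are assumptions for illustration, not anything Mushahwar and Wagner prescribe):

```python
from datetime import date, timedelta

# Hypothetical agent-authorization records: what each AI agent can reach
# and when a human last reviewed that grant.
authorizations = [
    {"agent": "contract-review-bot", "system": "document-store",
     "last_reviewed": date(2025, 1, 10), "reviewer": "j.doe"},
    {"agent": "triage-agent", "system": "claims-db",
     "last_reviewed": date(2024, 6, 2), "reviewer": "a.smith"},
]

REVIEW_WINDOW = timedelta(days=90)  # assumed policy: re-review quarterly

def stale_authorizations(records, today):
    """Return grants whose last human review falls outside the window."""
    return [r for r in records if today - r["last_reviewed"] > REVIEW_WINDOW]

# Flag every authorization that has gone unreviewed past the window
for r in stale_authorizations(authorizations, date(2025, 3, 15)):
    print(f"STALE: {r['agent']} -> {r['system']} "
          f"(last reviewed {r['last_reviewed']} by {r['reviewer']})")
```

A check like this running on a schedule is a pipeline answer to question four; a document stating that reviews happen quarterly is a policy answer, and only one of the two produces evidence.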
Our Take
Practitioner fieldwork like this matters more than vendor white papers in this category. Mushahwar and Wagner are not selling a platform. They are preparing for RSA, where they plan to examine the infrastructure side of a problem they are currently advising clients on. That two attorneys with incident response backgrounds independently walked a vendor floor and documented the gap between governance vocabulary and governance infrastructure is the same kind of evidence the compliance theater argument has been assembling from a different direction. Their “policies promise governance, pipelines prove it” framing is an evidentiary standard, the same standard a regulator or litigant will apply when something eventually fails.
What remains unresolved even in this framework is significant. Mushahwar and Wagner are direct about the status of their work — it is version 1.0, they expect it to change substantially after RSA, and the eight-domain framework represents the full governance infrastructure landscape on the horizon rather than the starting line. The cost and scale problem they name is real — for mid-market organizations without dedicated security engineering the full stack is prohibitively expensive if approached all at once. The agent orchestration category is the least settled in their framework and the one they say will evolve the most. Shadow AI is named but not solved. The same unresolved problems GAIG has identified in its compliance theater coverage persist across the practitioner fieldwork.