Most AI governance products enter the picture after the model has already done the important part. They filter outputs, flag responses, or add controls after the decision path is already underway. A growing number of builders have started from a different premise, and it changes the whole architecture. If governance only arrives after the model thinks, then governance is already late.
Crew 42 is a coalition of 24 independent builders working toward a shared AI and infrastructure ecosystem. It is not a single company and it is not a unified product suite. It is closer to a coordination layer, where separate efforts are being developed with the expectation that, over time, they can work together as a system.
The four founders featured here are building very different things in very different contexts. Still, they have all arrived at the same conclusion from different directions. The governance problem in AI does not begin with compliance reporting or documentation. It begins inside the architecture itself.
Steven Stobo / WeRAI
Steven Stobo is the founder of WeRAI AI Integration Inc., bringing three decades of building and maintaining infrastructure systems where failure is not theoretical. That background shows up directly in what he is building now. WeRAI introduces a pre-semantic layer that shapes the space an AI model operates in before any processing begins. If an input does not meet defined constraints, the system does not proceed: no tokens are generated, no data leaves the boundary, and no compute is consumed. Governance is enforced before output exists, not interpreted after the fact.
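To make the ordering concrete, here is a minimal Python sketch of that kind of pre-execution gate. Everything in it (the Constraint type, pre_semantic_gate, governed_call) is hypothetical and illustrates only the pattern of refusing before the model is ever invoked, not WeRAI's actual implementation.

```python
# Hypothetical sketch of a pre-execution constraint gate. All names here
# are illustrative assumptions, not WeRAI's API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Constraint:
    name: str
    check: Callable[[str], bool]  # returns True if the input may proceed

def pre_semantic_gate(prompt: str, constraints: list[Constraint]) -> Optional[str]:
    """Return the name of the first failing constraint, or None if allowed."""
    for c in constraints:
        if not c.check(prompt):
            return c.name
    return None

def governed_call(prompt: str, constraints: list[Constraint],
                  model: Callable[[str], str]) -> str:
    failed = pre_semantic_gate(prompt, constraints)
    if failed is not None:
        # The model is never invoked: no tokens generated, no data leaves
        # the boundary, no compute consumed.
        return f"REFUSED: constraint '{failed}' not met"
    return model(prompt)

if __name__ == "__main__":
    rules = [Constraint("no_pii", lambda p: "ssn" not in p.lower())]
    print(governed_call("What is my SSN?", rules, lambda p: "model output"))
```

The design point is the order of operations: the constraint check runs before the model call, so a refusal costs no generation at all.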
Before: an AI that acts and they hope it’s right.
After: an AI that asks and they know it’s governed. The difference between hope and proof.
— Steven Stobo, WeRAI
This timing matters because regulatory direction is already moving toward intervention at the point of execution. The EU AI Act requires mechanisms that can interrupt or stop system behavior under defined conditions. Most existing platforms operate after generation, while WeRAI operates before generation begins, which aligns directly with how that requirement is written.
Stobo’s path to this approach comes from a simple realization: if a system can access and act on your data without first passing through a controlled boundary, then governance was never actually present; it was only recorded.
Jon Gartmann / X-Loop³ Labs
Jon Gartmann is the founder of X-Loop³ Labs, working out of Switzerland with a focus on AI system control and governance architecture. His work sits at the intersection of EU AI Act requirements and large model behavior, with multiple patents pending around how systems can be constrained at the point where decisions form rather than where outputs are reviewed.
X-Loop³ approaches the same problem from a different technical direction, still centered on a pre-semantic control layer. The system shapes how tokens are generated before they are ever produced, reducing unnecessary computation, cutting token cost significantly, and preventing hallucination from forming in the first place. Each decision is paired with a SHA-256 proof, which Aegis Tower then translates into audit-ready evidence tied directly to system behavior. Governance is embedded into execution rather than attached afterward.
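One plausible way to pair a decision with a SHA-256 proof is to hash a canonical serialization of the decision record, sketched below with Python's standard hashlib and json modules. The record layout and function names are assumptions for illustration, not X-Loop³'s design.

```python
# Illustrative only: attaching a SHA-256 digest to each decision record so
# it can later be verified as audit evidence. Not X-Loop³'s implementation.
import hashlib
import json
import time

def make_decision_proof(decision: dict) -> dict:
    """Hash a canonical serialization of the decision and attach the digest."""
    canonical = json.dumps(decision, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {"decision": decision, "sha256": digest}

def verify_decision_proof(record: dict) -> bool:
    """Recompute the digest and confirm the record was not altered."""
    canonical = json.dumps(record["decision"], sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() == record["sha256"]

if __name__ == "__main__":
    rec = make_decision_proof({"input_id": "req-001", "action": "allow",
                               "ts": time.time()})
    assert verify_decision_proof(rec)
    print(rec["sha256"])
```

Hashing a canonical form matters here: any later change to the decision record changes the digest, which is what lets a proof stand in for the behavior itself.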
Before: hope and lawyers.
After: audit-ready by design. You sleep, we prove.
— Jon Gartmann, X-Loop³ Labs
The key idea behind this work reframes hallucination entirely. It is not treated as a data quality issue or a prompt issue. It is treated as a structural property of how the model navigates its internal space. If the model can move away from the correct path during generation, checking the output afterward does not address the cause. The structure has to be constrained before the model enters it.
Gartmann and Stobo arrived at this conclusion independently, in different environments, which is worth paying attention to because it suggests the direction is not isolated; it is emerging.
Gerald “Trucker G” Johnson
Gerald “Trucker G” Johnson comes into this from a completely different environment, though the constraint he is solving for is the same. After thirty-three years and four million miles as an over-the-road trucker, instructor, and owner-operator, his perspective on system failure is grounded in situations where mistakes carry immediate consequences. That experience now shapes how he approaches AI governance.
The system he is building keeps a human operator inside the loop at the point where decisions turn into actions. Every step is logged, every decision is traceable, and every tool operates within boundaries defined by the operator rather than assumed by the system. The focus is not on reviewing behavior later. It is on making sure the moment where an action is taken is controlled, visible, and accountable.
That life teaches you one thing: if the system fails, people die. You build redundancy, you follow protocol, you never let the machine make the call without the operator.
— Gerald “Trucker G” Johnson
His work also extends into education, introducing these constraints early so that users learn how to interact with AI systems within defined limits before unsafe patterns develop. The central issue he identifies is the binding event, the moment a system moves from output into action, because that is where risk materializes if no one is watching.
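The pattern is simple to state in code. The sketch below is a hypothetical illustration of gating the binding event: an action is proposed, nothing executes without explicit operator approval, and every step lands in an audit log. The names and log format are illustrative, not Johnson's system.

```python
# Hypothetical sketch of a human-in-the-loop gate at the binding event.
# All names and the log format are illustrative assumptions.
import datetime

AUDIT_LOG: list[dict] = []

def log(event: str, **details) -> None:
    """Record every step with a UTC timestamp so decisions stay traceable."""
    AUDIT_LOG.append({"ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                      "event": event, **details})

def bind_action(proposed_action: str, operator_approves) -> bool:
    """Execute only if the operator explicitly approves; log either way."""
    log("action_proposed", action=proposed_action)
    if not operator_approves(proposed_action):
        log("action_rejected", action=proposed_action)
        return False
    log("action_executed", action=proposed_action)
    return True

if __name__ == "__main__":
    # Stand-in approver; a real deployment would prompt the human operator.
    deny_all = lambda action: False
    bind_action("send_dispatch_update", operator_approves=deny_all)
    for entry in AUDIT_LOG:
        print(entry)
```

The point of the structure is that the default is refusal: the machine cannot cross from output into action unless the operator makes the call.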
Jeff Sanders / ArcAI Systems
Jeff Sanders is the owner and founder of ArcAI Systems, which operates across three connected layers that are designed to function as one governed system rather than separate tools stitched together later. His work centers on ordering the system correctly from the start, so governance is established before intelligence is allowed to operate.
ArcAI Systems is structured around LegacyCore, which defines the governing rules and constraints, ArcAI OS, which enforces those rules at the operating system level, and BioArc, which carries those constraints into the physical execution layer. The result is a system where behavior remains consistent because every layer is operating within the same defined boundaries, instead of relying on checks that happen after decisions are made.
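The ordering Sanders describes can be illustrated with a small sketch: rules defined once in a governing layer, checked by every layer below before anything executes. The layer analogues below borrow the article's names for readability, but the code itself is an assumption, not ArcAI's implementation.

```python
# Hypothetical illustration of governance-first layering. The RULES table
# stands in for a governing layer (LegacyCore analogue); each lower layer
# enforces the same boundaries before acting.
RULES = {"max_speed": 25, "allow_network": False}  # governing rules, defined once

def os_layer_execute(task: dict) -> None:
    # Operating-system analogue: enforce the shared rules before dispatching.
    if task.get("needs_network") and not RULES["allow_network"]:
        raise PermissionError("network use blocked by governing rules")
    physical_layer_execute(task)

def physical_layer_execute(task: dict) -> None:
    # Physical-execution analogue: the same boundaries apply at this layer too.
    speed = min(task.get("speed", 0), RULES["max_speed"])
    print(f"executing {task['name']} at speed {speed}")

if __name__ == "__main__":
    os_layer_execute({"name": "move_arm", "speed": 40, "needs_network": False})
    try:
        os_layer_execute({"name": "fetch_update", "speed": 0, "needs_network": True})
    except PermissionError as e:
        print(f"blocked: {e}")
```

Because every layer reads from the same rule set, behavior stays consistent without after-the-fact checks, which is the sequencing the section describes.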
Everyone else is building upside down. We built the governance first. That order is the breakthrough.
— Jeff Sanders, ArcAI Systems
What ties Sanders to the other founders is not the specific implementation, but the sequence they all chose. None of these systems treat governance as something to be added after the model is running. Each one places it at the beginning, where it shapes how the system behaves before any output or action can occur.
Our Take
The current AI governance market formed around compliance requirements because that was the entry point available to the first vendors who brought products to enterprise buyers. Those vendors already operated inside governance, risk, and compliance (GRC) environments, so governance was introduced as documentation, workflows, and approval systems that could fit into existing procurement structures. Over time, that definition carried forward, and organizations adopted governance platforms that could demonstrate review and control on paper while remaining disconnected from what systems actually do once deployed.
Regulatory frameworks are now moving in a direction that requires that connection to exist in practice. The NIST AI RMF places responsibility across the full lifecycle and requires ongoing measurement of system behavior, not a single evaluation moment. The EU AI Act requires continuous risk management and post-deployment monitoring, which means providers must track how systems perform after they are in use. ISO/IEC 42001 defines governance as a continuous process that includes monitoring, review, and adjustment. These requirements all point toward the same operational need: visibility into live system behavior rather than reliance on records created earlier.
What remains unresolved is how that visibility extends across environments where systems interact with external tools, third party models, and distributed infrastructure. Interoperability is still limited, and systems introduced outside formal governance processes remain outside any monitoring framework entirely. Organizations are also still determining how to structure their architecture while the market continues to consolidate around capabilities that connect directly to execution.
Enterprises are deploying AI systems today while governance remains largely centered on documentation that does not observe runtime behavior. The gap between regulatory requirement and operational capability is measurable, and it continues to widen as deployment expands. GAIG tracks the vendors building governance infrastructure that connects directly to deployed systems and produces evidence from actual behavior. The marketplace at GetAIGovernance.net organizes those platforms so teams can evaluate which ones close that visibility gap based on how systems operate in production.