Anthropic has launched Project Glasswing, a major new cybersecurity initiative, following the discovery of thousands of previously unknown zero-day vulnerabilities by its latest AI model. The partnership brings together Microsoft, Nvidia, Amazon, Apple, and more than 40 other technology companies, with early discussions involving the US government.
Claude’s Mythos Preview model identified critical bugs, including flaws that had gone undetected for 16 and even 27 years, across every major operating system and popular browser. The findings demonstrated the model’s ability to autonomously discover complex security flaws at massive scale. In response, Anthropic is committing $100 million in usage credits and $4 million in donations to support open-source cybersecurity efforts.
This announcement marks a significant step in using frontier AI models defensively, protecting critical infrastructure at the speed attackers now operate. It highlights how quickly AI capabilities are advancing on both the offensive and defensive sides of cybersecurity. For years, security teams have warned that AI would accelerate threat discovery; now the industry is actively channeling that power toward defense through large-scale collaboration. The initiative reflects growing recognition that securing the shared software ecosystem requires coordinated action from the biggest players in technology. As AI models grow more capable, the gap between what attackers and defenders can achieve is narrowing rapidly, and Project Glasswing represents one of the first major attempts to close that gap through structured industry partnership.
Key Terms
Project Glasswing
Anthropic’s new collaborative initiative that uses advanced AI to discover and help remediate vulnerabilities in critical software and open-source codebases.
Zero-Day Vulnerabilities
Security flaws unknown to the vendor, and therefore unpatched, at the moment they are discovered or exploited; defenders have had "zero days" to respond.
Mythos Preview
Anthropic’s latest AI model capable of large-scale autonomous vulnerability discovery.
AI-Driven Vulnerability Discovery
Using powerful AI models to scan codebases and identify complex security weaknesses at speeds and scale previously impossible with traditional methods.
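The article does not describe Glasswing's internals, but the scanning pattern can be pictured as a loop that splits a codebase into chunks, submits each chunk for review, and aggregates structured findings. The sketch below is a minimal illustration, not Anthropic's actual pipeline or API: `review_chunk` stands in for a model call and uses a trivial heuristic purely so the example runs end to end.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    description: str

def review_chunk(file: str, lines: list[str]) -> list[Finding]:
    """Stand-in for a model call. A real system would send the chunk to an
    AI model and parse its structured findings; here we flag a classic C
    pitfall (unbounded strcpy) purely for illustration."""
    findings = []
    for i, line in enumerate(lines, start=1):
        if "strcpy(" in line:
            findings.append(Finding(file, i,
                "unbounded strcpy: possible buffer overflow"))
    return findings

def scan_codebase(files: dict[str, str], chunk_size: int = 200) -> list[Finding]:
    """Split each file into fixed-size chunks and review each independently,
    mirroring how large codebases exceed a single model context window."""
    all_findings = []
    for path, source in files.items():
        lines = source.splitlines()
        for start in range(0, len(lines), chunk_size):
            chunk = lines[start:start + chunk_size]
            for f in review_chunk(path, chunk):
                # Re-offset chunk-relative line numbers to the whole file.
                all_findings.append(Finding(f.file, start + f.line, f.description))
    return all_findings

demo = {"util.c": "void f(char *s) {\n    char buf[8];\n    strcpy(buf, s);\n}\n"}
for finding in scan_codebase(demo):
    print(f"{finding.file}:{finding.line}: {finding.description}")
```

The chunking step is the part that matters: it lets the same review logic run in parallel across arbitrarily large codebases, which is what makes machine-speed discovery feasible.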
Conditions Driving This Change
Several deep structural shifts are pushing the industry toward large-scale AI-powered defense partnerships like Project Glasswing. The convergence of advanced AI capabilities, persistent supply chain weaknesses, and rising threat velocity has made solo efforts unsustainable.
AI models have now reached a level of reasoning and code understanding that allows them to identify subtle, long-hidden vulnerabilities far faster than teams of human experts.
The software supply chain remains dangerously exposed, with thousands of exploitable bugs persisting for decades in core operating systems and widely used browsers.
Attackers are already actively using AI tools to discover and weaponize vulnerabilities, forcing defenders to match that speed and scale or risk falling behind.
No single company can secure the entire shared software ecosystem alone — the interdependencies across platforms and open-source components are simply too complex.
Regulatory and national security pressure is increasing rapidly to protect critical infrastructure from AI-augmented threats.
These pressures have created a clear moment of convergence. The technology now exists, the risks are accelerating, attackers are moving at machine speed, and collaboration has become essential rather than optional. Project Glasswing represents the industry’s first major coordinated response to this new reality.
What Security Looked Like Before
Before initiatives like Project Glasswing, vulnerability discovery depended almost entirely on manual code reviews, traditional static and dynamic analysis tools, and signature-based detection systems. Security teams spent weeks or months poring over code manually, running limited scans, and hoping their tools would catch known patterns. These approaches were slow, extremely labor-intensive, and often missed novel or deeply buried flaws that did not match existing signatures.
Many critical bugs remained undiscovered for years — sometimes even decades — because human teams could only review a fraction of the massive, interconnected codebases that power modern systems. Collaboration between companies stayed limited and mostly reactive. Organizations typically shared vulnerability information only after an exploit had already appeared in the wild. Security teams worked with fragmented tools that offered incomplete visibility across operating systems, browsers, libraries, and open-source components. The entire process felt largely reactive rather than proactive. Teams patched what they knew about while accepting a baseline level of unknown risk in foundational software that everyone relied upon daily. The pace of discovery simply could not keep up with the growing complexity and sheer volume of modern code. This left critical infrastructure exposed to threats that could remain hidden for a very long time.
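The limitation of the signature-based tooling described above can be made concrete: detection only fires when code matches an already-known pattern, so any novel flaw passes silently. The sketch below uses a few illustrative rules (the signatures and CWE labels are examples, not a real tool's ruleset).

```python
import re

# Illustrative signatures for known-dangerous constructs; a real static
# analyzer ships thousands of curated, regularly updated rules.
SIGNATURES = {
    "CWE-78 command injection risk": re.compile(r"os\.system\("),
    "CWE-327 weak hash": re.compile(r"hashlib\.md5\("),
    "CWE-502 unsafe deserialization": re.compile(r"pickle\.loads\("),
}

def signature_scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every signature match.
    Anything not covered by a rule -- i.e. a novel flaw -- is missed."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits.append((lineno, rule))
    return hits

code = (
    "import pickle\n"
    "obj = pickle.loads(blob)\n"
    "subtle_logic_bug()  # novel flaw: no signature, no hit\n"
)
print(signature_scan(code))  # [(2, 'CWE-502 unsafe deserialization')]
```

Line 3 is the point of the example: a genuinely new vulnerability produces no match at all, which is why signature-driven defense stayed reactive until an exploit surfaced and a new rule could be written.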
What’s Changing Now
Project Glasswing is fundamentally changing how the industry approaches vulnerability discovery by bringing frontier AI models into coordinated, large-scale defensive use. Anthropic and its partners are now deploying advanced AI to proactively scan and identify vulnerabilities across critical software and open-source infrastructure at speeds and scale that were previously impossible.
The initiative gives participating organizations shared access to powerful models through $100 million in usage credits. It also directs $4 million in direct donations to strengthen open-source cybersecurity projects. Partnerships with Microsoft, Nvidia, Amazon, Apple, and dozens of other major technology companies create a collaborative defense layer capable of hunting bugs across operating systems, browsers, and foundational codebases simultaneously.
This marks a clear shift from slow, fragmented human efforts to fast, coordinated, AI-powered processes. Instead of waiting for bugs to surface through reports or active exploitation, the industry is now actively hunting them down using AI at machine speed. The partnership model allows each company to contribute resources and expertise while benefiting from collective discoveries made by the group. As a result, the software supply chain is becoming more resilient overall. What once took years of scattered manual work can now move forward in weeks through shared AI capabilities and coordinated action.
Our Take
AI Security Take
Anthropic’s launch of Project Glasswing signals a structural shift in how the cybersecurity industry operates. Frontier AI models are now being deployed defensively at industrial scale to protect the software supply chain. The rapid discovery of thousands of long-standing zero-day vulnerabilities in a short period demonstrates both the tremendous opportunity and the growing urgency facing organizations today.
For governance, security, and monitoring teams, this development means AI-augmented attackers and defenders will soon operate at machine speed. The gap between what attackers can achieve and what traditional defenses can handle is narrowing quickly. Organizations will need much stronger runtime visibility, behavioral controls, and auditable oversight to safely manage and monitor these powerful AI systems once they are running in production environments.
The line between offensive and defensive AI capabilities continues to blur. This makes robust governance frameworks more important than ever before. As collaborative AI-driven security initiatives like Project Glasswing expand, enterprise teams must ensure they have the right layers of monitoring and control in place to maintain accountability and evidence of proper operation. GAIG tracks platforms that deliver the governance, monitoring, and enforcement layers required as AI-powered security becomes the new standard across the industry. The real challenge ahead lies in connecting these powerful AI capabilities to verifiable, auditable control in live environments.