Project Glasswing: Securing critical software for the AI era
Anthropic launches Project Glasswing to harden critical software against AI-driven cyber threats, starting with open-source secure-by-design tools.
Image: GlobalBeat / 2026
Anthropic unveiled Project Glasswing on Tuesday, a $10 million initiative to harden critical software against AI-powered attacks.
The San Francisco lab said the program will fund open-source tools that let defenders scan millions of lines of code for flaws that large language models can now spot faster than humans.
Security teams have watched nervously as generative AI cuts exploit-discovery time from weeks to minutes. Glasswing aims to flip that advantage by giving engineers the same automated firepower before deployment.
“Attackers are already using Claude and rival models to find zero-days at scale,” Anthropic security director Jason Clinton told reporters. “We either arm the defenders or accept a permanent disadvantage.”
The first grants, sized between $50,000 and $500,000, will land in July. Winners must release their code under permissive licenses so power-grid operators, hospital networks, and government agencies can adopt them without vendor lock-in.
Clinton said Anthropic will not demand exclusive rights or data-sharing, a departure from venture terms that often kneecap open-source projects. “We want these tools everywhere critical infrastructure runs,” he added.
Awards will favor projects that weave formal verification into CI/CD pipelines, the automated build-test-deploy systems that push updates to cloud containers and on-premises servers. Formal verification mathematically proves that a program meets specified properties, but the technique has lagged behind DevOps speed. Glasswing funding targets tooling that lets verification keep pace with daily releases.
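In practice, "weaving verification into a pipeline" usually means a gate step that blocks deployment unless every proof obligation discharges. The following minimal Python sketch illustrates the shape of such a gate; the obligation names and checks are hypothetical stand-ins, not part of any announced Glasswing tool:

```python
# Minimal sketch of a CI/CD verification gate. Each obligation pairs a
# name with a check; in a real pipeline the check would invoke a prover
# rather than a toy lambda. All names here are illustrative.

def run_verification_gate(obligations):
    """Return True only if every (name, check) proof obligation holds."""
    failures = [name for name, check in obligations if not check()]
    for name in failures:
        print(f"verification FAILED: {name}")
    return not failures

# Toy obligations standing in for machine-checked proofs.
obligations = [
    ("buffer_index_in_bounds", lambda: all(i < 16 for i in range(16))),
    ("release_artifact_signed", lambda: True),  # placeholder check
]

gate_passed = run_verification_gate(obligations)
```

A CI job would exit nonzero when the gate fails, so an unproven build never reaches the deploy stage.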
Early contenders include a Rust crate that translates LLVM intermediate representation into SMT queries, and a Python framework that turns English security policies into machine-checkable proofs. Both prototypes found memory-safety bugs in popular TLS libraries during pilot audits, according to summaries posted to GitHub.
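The summaries do not include source for either prototype, but the general idea of translating a program property into an SMT query can be sketched in a few lines. The Python function below, a hypothetical illustration rather than either contender's actual code, emits an SMT-LIB 2 query that is satisfiable exactly when an 8-bit index can reach past a 16-byte buffer:

```python
# Hedged sketch of IR-to-SMT translation: emit an SMT-LIB 2 query asking
# whether an unconstrained 8-bit index can exceed a buffer bound. The
# function and variable names are illustrative, not from any real crate.

def emit_bounds_query(buf_len, index_bits=8):
    """Build an SMT-LIB 2 query that is satisfiable iff an
    out-of-bounds index value is reachable."""
    return "\n".join([
        f"(declare-const idx (_ BitVec {index_bits}))",
        f"(assert (bvuge idx (_ bv{buf_len} {index_bits})))",
        "(check-sat)",
    ])

query = emit_bounds_query(16)
```

Feeding the resulting string to an SMT solver such as Z3 would answer `sat` here, because nothing constrains `idx`; a real translator would also assert the path conditions the program imposes on the index.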
The announcement lands one week after Microsoft reported state actors used GPT-4 to generate polymorphic malware that evaded signature detection. The Chinese group Volt Typhoon and the Iranian group Cotton Sandstorm both fed model output into scripting engines that rebuilt payloads on every infection, Microsoft said.
Critical infrastructure bears the brunt. In March 2024, Change Healthcare admitted that a ransomware gang had spent nine days undetected inside its medical-claims software, a breach that cost the company an $872 million writedown. Investigators traced the intrusion to a leaked credential that GPT-based reconnaissance likely unearthed from old breach dumps, according to a Senate briefing memo circulated last month.
CISA director Jen Easterly welcomed Glasswing in a statement, calling it “a down-payment on keeping U.S. codebases ahead of adversaries who automate vulnerability research.” The agency will share classified threat signatures with grantees under a 2023 memorandum that already feeds indicators to Google and Microsoft.
Private-sector reaction split along competitive lines. Cloudflare CEO Matthew Prince tweeted that the move “raises the ceiling for everyone,” while Palo Alto Networks vice president Wendi Whitmore warned that “throwing grants at academic tools won’t close the adoption gap until vendors integrate them.” Start-up Zellic, which sells AI-powered audits for $25,000 per contract, saw three prospects pause talks to “see what Glasswing produces for free,” founder Stephen Tong said.
Open-source maintainers greeted the news with cautious optimism. Tokio project lead Alice Ryhl said sponsorship frees contributors from “consulting treadmill” pressure, but she worried that funding would dry up after the initial $10 million. Clinton declined to commit additional capital, saying Anthropic will “evaluate outcomes” before deciding on a second tranche.
Background
Formal verification dates to the late 1960s and 1970s, when computer scientists first developed logics for proving programs correct and applied them to security-critical software. The technique stalled outside academia because writing proofs took longer than writing code. Renewed interest followed the 2014 Heartbleed bug, which left private keys retrievable from an estimated 17 percent of HTTPS servers. Linux Foundation research estimated the flaw cost the global economy $500 million in patching and certificate re-issue fees.
AI accelerates both sides of the arms race. In 2023, researchers at Cornell showed GPT-4 could locate 26 of 45 purposely planted bugs in the Linux kernel, beating static-analysis tools that found 11. Months later, a team at Seoul National University demonstrated an adversarial agent that iteratively rewrote source until identical behavior passed verification checks, effectively laundering exploits through formally verified code. Glasswing tries to tip the balance back toward defenders by coupling proof generation with runtime monitoring that flags behavioral drift.
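The runtime-monitoring half of that coupling can be sketched simply: record which event transitions the verified build actually exhibits, then flag any transition at runtime that falls outside that baseline. The Python sketch below illustrates the idea; the event names and baseline set are hypothetical, not drawn from any Glasswing project:

```python
# Hedged sketch of behavioral-drift detection: compare consecutive event
# pairs observed at runtime against a baseline recorded from the verified
# build. Events and baseline here are illustrative placeholders.

BASELINE = {("open", "read"), ("read", "close"), ("open", "close")}

def drifted_transitions(events):
    """Return consecutive event pairs absent from the verified baseline."""
    return [pair for pair in zip(events, events[1:]) if pair not in BASELINE]

# A "write" following "read" never appeared in the verified build's traces,
# so both transitions touching it are flagged.
drift = drifted_transitions(["open", "read", "write", "close"])
```

A proof alone cannot catch an exploit laundered through verified code, but a monitor like this would at least surface the moment the binary's behavior departs from the traces the proof was built against.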
What’s Next
Awards committee chair Sofía Celi said winners must ship alpha versions by December and production releases within 18 months. Anthropic pledged cloud credits and early access to Claude 4 reasoning models due this fall, betting tighter integration will surface attack patterns before public release. Celi will present preliminary metrics at the DEF CON 33 security conference in August, where hackers traditionally test new tooling against live targets.
If uptake lags, the initiative risks becoming another archive of abandoned proofs. Success depends less on algorithms than on whether overworked engineers adopt unfamiliar workflows before attackers scale their own AI arsenals.
Technology & Science Editor
Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds a degree in Computer Science and Journalism.