AI finds the bugs. Humans still have to fix everything else.

The Mythos experiment is real and impressive. But the threat actors who actually breach organizations aren't waiting on it, writes A. Stryker of Fable Security.
Assume Anthropic's much-discussed Mythos AI model scales perfectly. Assume the economics solve themselves.
Assume every organization that needs Mythos-class vulnerability discovery eventually gets access to it, at a price they can afford, in time to matter. Walk me through your plan.
I'm not being dismissive—I'm being serious. Because that thought experiment is where the AI vulnerability discovery conversation breaks down, and it's where the actually useful work begins.
The Cloud Security Alliance, SANS, OWASP, and a coalition of practitioners who clearly did not sleep last week pulled together a real, rigorous rapid-response brief on Mythos and Glasswing in roughly 48 hours. It's worth reading—not just for the defensive recommendations, which are solid, but for what it admits in the appendix almost as an aside.
"The historical collapse in time-to-exploit has not yet produced a proportional increase in the impact of exploitation."
That is, most of the most consequential incidents of recent years involved credential abuse, social engineering, or supply-chain compromise rather than zero-day exploitation exemplified by the Mythos experiment.
The authors of the Mythos panic brief put that sentence in the same report that's being shared widely and quoted breathlessly in headlines. It's just tucked into the appendix.
Now, that concession is not a reason to dismiss the genuine capability shift. It's real. Anthropic's team reports that Mythos found a 17-year-old FreeBSD vulnerability with no human involvement after the initial prompt, and the model scores 93.9% on SWE-bench Verified and 83.1% on CyberGym.
I salute and applaud every individual involved for taking the hard road instead of the profitable one.
But back to the thought experiment. Let's say it works. Let's say AI finds everything.
You still need a person to tell the open-source maintainer—the one running a critical project on volunteer time, with no security budget—that there's a critical flaw in their code, and to make them believe it's real and not another hallucinated AI slop report. (The Linux kernel team went from two bug reports a week to ten, initially mostly hallucinated, before the reports became verified—a shift that took time and human judgment to navigate.)
You still need a person to triage which of the 14,400-plus exploits VulnCheck tracked in 2025 actually applies to your environment this week, given your stack and your constraints.
And—more specifically for the Mythos / Glasswing discussion—fewer than 1% of the vulnerabilities Mythos found have been patched, for two main reasons:
- Not every vulnerability is worth patching, or even exploitable. That triage takes time and expertise from highly trained humans, who spend much of it correcting the output of an overeager model that presents every imperfection as a zero-day.
- Patching at scale is a human coordination problem and always has been.
And I fully expect that in the next few weeks we'll see open-source maintainers clicking phishing emails that look like they're from Glasswing, notifying them of a major vulnerability that the Mythos project was kind enough to find for them.
Because the AI arms race—as real as it is—doesn't change what actually bites most organizations.
It's credential abuse. Spearphishing. Supply-chain compromise at the human layer, as the recent axios-NPM breach, pulled off after a week-long social engineering campaign, showed us.
The CSA brief says so. The fact that business email compromise (BEC) attacks now outperform ransomware in operational impact says so. The latest data breach analyses all say so.
The most sophisticated threat actors in the world are not waiting on Mythos-class exploit discovery when a well-crafted pretexting campaign gets them further, faster, cheaper.
Security is only as strong as the people who have to make real-time decisions in its name, every single day—usually without enough context and often under pressure.
I push back on the instinct to respond to AI-accelerated offense with AI-accelerated defense and call it a strategy. Not because AI defense tools aren't valuable—they are—but because "AI vs. AI" is an answer to the wrong question. It leaves that critical human layer not just unaddressed, but actively deprioritized.
And when the AI bill eventually comes due—and it will—organizations that hollowed out their human programs to chase the arms race are going to find out they bought one layer of defense and abandoned another.
The same AI capabilities that make Mythos impressive are already very good at something less dramatic but more universally applicable: getting the right security information to the right person. AI can do that at scale, right now, without a slot in the Glasswing partner program.
Mythos is impressive. Glasswing is serious work. But the organizations that come out ahead of this moment won't be the ones that reacted fastest to the announcement. They'll be the ones that used it to ask a better question: not "how do we match the AI on the other side," but "how do we make sure every person in this organization is equipped to be part of the defense?"
People are the patch. AI helps—when you use it to empower your people, not just replace them.