Claude Mythos: Preparing for a World Where AI Finds and Exploits Vulnerabilities Faster Than Ever

Anthropic's new model can autonomously discover zero-days and develop working exploits. While access is currently limited to responsible actors, now is the time to strengthen response playbooks, reduce exposure, and incorporate AI into security programs.

TL;DR: Anthropic's new model is capable of autonomously discovering zero-day vulnerabilities and developing working exploits. While access is currently highly restricted, Claude Mythos offers a preview of a near future in which these capabilities are in the hands of attackers. 2026 is the year to prepare for the emerging AI-led vulnerability wave; in the short term, get ready for a large influx of AI-discovered CVEs in critical software. In the medium-to-long term, plan to invest in an AI-focused AppSec program that finds the AI-discoverable vulnerabilities before threat actors have a chance to exploit them.

What is Claude Mythos?

Claude Mythos is a new, unreleased frontier model developed by Anthropic. This model has autonomously discovered thousands of zero-day vulnerabilities in major operating systems and web browsers. Current publicly available frontier models are already capable, to some extent, of patch-diffing and generating exploits when pointed to the relevant code snippets. However, Anthropic has also demonstrated that Mythos can take a CVE identifier and a git commit hash as input, then autonomously produce a full working exploit within hours, at relatively low cost. Additionally, it can reportedly chain multiple vulnerabilities together, and reverse engineer closed-source binaries.

This announcement signals the continuation of a trend (one that's picked up in the past year): AI models are beginning to excel at technical vulnerability research. This makes sense, since exploit development has a clear, verifiable success signal -- an exploit either works or it doesn't -- which creates a positive feedback loop for training and improving these models.

What should we expect in the near future? 

Given the current industry trajectory, we see a necessary shift in how security products are built and how defenders operate. These emerging capabilities should be incorporated into the core of our security programs and products so that we can adapt to an eventual reality in which malicious actors are doing the same with their own tooling.

We currently think about AI-enabled security research as a timeline composed of three main phases. We believe that success at each phase will require security teams and security vendors to adopt new strategies and tools:

Short term: expect more CVEs

Right now, Mythos is only in the hands of responsible actors -- critical software infrastructure providers like Microsoft, Google, and the Linux Foundation. The model is not publicly available, and Anthropic states that they have no plans to change that.

Therefore, the most immediate consequence is simply more CVEs. Security researchers using models like Mythos will discover zero-days, prove their exploitability, and responsibly disclose them to software vendors and open source project maintainers. Every vulnerability discovered using these models will eventually (or at least hopefully) become a public CVE with a patch available to end users.

These CVEs are unlikely to be immediately accompanied by public exploits, but attackers will invest in (AI-assisted) patch diffing, whether the affected product is open or closed source. This will ultimately lead to parallel exploitation in the wild of a higher number of recently published vulnerabilities. Unless security teams scale up their own response capabilities, attackers will have more opportunities to take advantage of unpatched systems.
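To make the patch-diffing threat concrete, here is a minimal sketch of the mechanical first step: reading the hunk headers of a fix's unified diff to learn which functions the patch touched, which narrows an attacker's (or model's) search for the underlying bug. The sample patch and function names are invented for illustration.

```python
import re

def changed_functions(unified_diff: str) -> set[str]:
    """Extract the function contexts from unified-diff hunk headers.

    Hunk headers look like '@@ -10,7 +10,8 @@ int some_function(...)';
    git fills the trailing text with the enclosing function, which
    points straight at where the patched logic lives.
    """
    contexts = set()
    for line in unified_diff.splitlines():
        m = re.match(r"^@@ -\d+(?:,\d+)? \+\d+(?:,\d+)? @@ (.+)$", line)
        if m:
            contexts.add(m.group(1).strip())
    return contexts

# Hypothetical security fix: a bounds check added before a copy.
sample_patch = """\
--- a/http.c
+++ b/http.c
@@ -101,7 +101,8 @@ int parse_header(char *buf, size_t len)
-    memcpy(dst, buf, len);
+    if (len > DST_MAX) return -1;
+    memcpy(dst, buf, len);
"""

print(changed_functions(sample_patch))
```

Automating this triage across every hunk of every new patch is exactly the kind of tedious, scalable work AI assistance accelerates for both sides.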

At this stage, vendors releasing patches for vulnerabilities in their own software should invest in making the patching process as seamless and painless as possible, to support end users dealing with the onslaught of new CVEs.

Medium term: prepare for the “Y2K moment”

"An ounce of prevention is worth a pound of cure."

It’s only a matter of time before models with capabilities similar to those of Mythos -- from Anthropic, Google, OpenAI, DeepSeek, Alibaba, and others -- become available to the public. We estimate that it will take roughly 12-18 months before these capabilities reach open-source models that anyone can run locally and without restrictions. From that point onwards, we should assume that malicious actors will be able to use AI models to discover and weaponize 0-day vulnerabilities at scale, and also rapidly weaponize n-days within hours of their public disclosure.

Beyond vulnerabilities in installable software, a primary attack surface for most modern enterprises today is their own API endpoints and web applications, where vulnerabilities tend to be logic-driven: authentication bypasses, broken authorization, exposed endpoints, and misconfigured access controls. In fact, current frontier models are already effective at identifying and exploiting these weaknesses.
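A minimal sketch of the most common logic flaw in that list -- broken object-level authorization (IDOR) -- using an in-memory store as a stand-in for a real database. All names here are illustrative:

```python
# Hypothetical in-memory store standing in for a real database.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's salary data"},
    2: {"owner": "bob", "body": "bob's notes"},
}

def get_document_vulnerable(user: str, doc_id: int) -> str:
    # Broken object-level authorization: any authenticated user can
    # read any document simply by guessing its id. No scanner rule
    # is needed to find this -- just reasoning about the logic.
    return DOCUMENTS[doc_id]["body"]

def get_document_fixed(user: str, doc_id: int) -> str:
    doc = DOCUMENTS[doc_id]
    # The fix: the requesting user must own the object.
    if doc["owner"] != user:
        raise PermissionError("not the owner")
    return doc["body"]
```

Flaws like this leave no memory-corruption signature; they are pure application logic, which is precisely why language models that can read and reason about code are already effective at finding them.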

Yet we are optimistic. As history has shown, every new technology in cyber that can be used by attackers can also be used by defenders, and the final outcome is more secure systems. Memory fuzzers, historically used by vulnerability researchers, are now embedded into core development processes, ultimately producing more secure software. Similarly, projects like Google's CodeMender are using AI to automatically fix security flaws at scale.

We may be heading towards a “Y2K moment”, in two important ways: on one hand, it’s an unavoidable and stressful change that may have unknown consequences. On the other, we believe the worst consequences (both knowable and unknowable) can be avoided if we work together to make preparations ahead of time, with the final outcome being more secure software, not less.

So what do we need to do? Make plans to allocate people and resources to application and code scanning (AI-assisted AppSec) as a high priority in 2026. Since the new models are still not widely available, we have time to create these joint teams across security and engineering. On the security vendor side, we must incorporate the latest AI capabilities into our tooling so customers always remain ahead of attackers.

Organizations that start building this muscle now -- using the latest AI models not just to surface vulnerabilities, but to rapidly and safely fix them in production environments -- will have a meaningful advantage over attackers whenever the next generation of models arrives. The best long-term strategy for defenders is to move away from static, siloed tools and toward dynamic security workflows with AI capabilities at their core.

Long term: how will AI change cybersecurity?

What happens once we’re past the “Y2K moment”, and advanced AI capabilities are in the hands of malicious actors?

The window between "patch available" and "exploited in the wild" is already shrinking and can be expected to shrink even faster. This means that security teams need to either be ready to patch faster than before, or do the hard work required to make software vulnerabilities a non-issue in their environment through a combination of attack surface reduction and defense in depth.

Looking ahead, we believe that although the core concepts of cybersecurity won’t change, the latest advancements in AI capabilities will dramatically change how we approach cybersecurity in a few key areas:

Attackers will continuously get better

We cannot foresee exactly how future AI models may empower threat actors, as the status quo even one year from now is hard to predict. But we can assume that every new generation of models will unlock greater opportunities to exploit more weaknesses at a faster pace and scale, and at a lower cost.

Imagine securing a house. We all have locks on our doors, and we assume that modern door locks are effective because most people aren’t brilliant locksmiths. But what happens when every person on the planet has access to an AI-powered locksmith? The locks we have today won’t be relevant anymore.

The most capable AI agents will set the security bar

This principle will hold true for every product category: firewalls, EDRs, DLP, email security, etc. The bar will continue to go up, meaning static security tooling will quickly become irrelevant, much like traditional cryptography in a post-quantum future. Security tools must therefore become dynamic, continuously being updated with the latest AI capabilities.

However, if attackers and defenders both have access to the same models, breaking that symmetry will require security teams to leverage context. By taking advantage of your internal knowledge of your own architecture and making it available to AI-powered security tooling, you can prioritize hardening and remediation efforts to focus on where they’ll be most impactful, and fix vulnerabilities at their root cause.

Resilient system design will no longer be optional

In a world where vulnerabilities are prevalent and exploitation is immediate, we must design systems with an "assume RCE" mentality. If your organization’s security posture depends entirely on your security team being able to patch vulnerable resources faster than attackers can exploit those vulnerabilities to compromise them, you will eventually find yourself fighting a losing battle. We will never be able to patch fast enough; secure design must ensure isolation for critical components. For security tooling, AI resiliency means the ability to continuously improve in the face of new AI-powered attacks. Static security tools (the door locks) will fail.  

Software can and must be more secure than ever

We can envision a world where software becomes dramatically more secure. Imagine a reality where every person on earth can produce the most secure software ever as a result of democratizing security knowledge and expertise through AI-assisted code security tooling.

What security teams should do right now

First, accelerate your vulnerability and patch workflows. Shorten your patch windows now, before containment timelines shrink from days to hours. Move towards automation and pre-approve mitigation steps (like blocking public access) so your team has options while testing patches. Prioritize based on context and real-world threat intelligence, knowing that the "patch immediately" bucket will only grow.
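One way to make pre-approved mitigations actionable is to encode the triage policy itself. The sketch below is an illustrative policy only -- the thresholds and field names are assumptions, not vendor guidance:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float
    internet_facing: bool
    exploited_in_wild: bool

def triage(f: Finding) -> str:
    """Illustrative triage policy: exposed and actively exploited
    means patch now; exposed but not yet exploited gets a
    pre-approved mitigation while the patch is tested; everything
    else joins the standard cycle."""
    if f.internet_facing and f.exploited_in_wild:
        return "patch immediately"
    if f.internet_facing and f.cvss >= 7.0:
        return "apply pre-approved mitigation: block public access"
    return "standard patch cycle"

print(triage(Finding("CVE-2026-0001", 9.8, True, True)))
```

Writing the policy down as code is what makes it automatable: the same function can run against every new finding without waiting on a human decision for each one.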

Second, aggressively reduce your attack surface and blast radius. Internet-facing resources present the highest risk as AI-assisted exploitation speeds up. Build defense in depth by minimizing public exposure of resources with sensitive data, secrets or high privileges -- even if they aren't currently vulnerable.
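The "even if they aren't currently vulnerable" point can be expressed as a simple inventory query: flag anything that is both publicly reachable and holds sensitive data, secrets, or high privileges. The inventory shape and field names below are illustrative assumptions:

```python
def high_blast_radius(inventory: list[dict]) -> list[str]:
    """Flag resources that are publicly reachable AND hold secrets or
    high privileges -- the combinations worth eliminating even when
    no CVE is currently open against them."""
    return [
        r["name"] for r in inventory
        if r["public"] and (r["has_secrets"] or r["admin_role"])
    ]

inventory = [
    {"name": "marketing-site", "public": True,  "has_secrets": False, "admin_role": False},
    {"name": "ci-runner",      "public": True,  "has_secrets": True,  "admin_role": True},
    {"name": "billing-db",     "public": False, "has_secrets": True,  "admin_role": False},
]
print(high_blast_radius(inventory))  # → ['ci-runner']
```

The marketing site is public but low-value; the database is sensitive but private. Only the resource combining exposure with privilege demands immediate attention.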

Finally, democratize your security organization through AI capabilities. Empower your teams today to begin benefiting from AI-based security workflows, from vulnerability discovery to automated remediation. Use AI code generation tools (Claude Code, Cursor, etc.) to fix code issues, estimate the impact of code changes, and accelerate patching processes. Setting this foundation now ensures you’ll be ready to immediately leverage new frontier AI models to defend your environment more rapidly than attackers can use those models to target it.

How Wiz can help

Wiz is already committed to helping our customers democratize security, prioritize remediation tasks, and proactively reduce their attack surface. Everything we’ve built at Wiz has been getting us ready for this moment. Understanding the context of the environment is what powers our AI agents.

We continuously evaluate the capabilities and performance of the latest AI models and incorporate them into our products to further streamline vulnerability detection, remediation, and response for security teams. Additionally, our MCP server enables security teams to consume Wiz’s deep context and risk analysis as part of their agentic workflows.

Shift Right to detect vulnerabilities before attackers do

Our customers can use Red Agent to scan their entire attack surface with an AI-powered attacker that continuously adapts and utilizes an ever-growing corpus of contextual information based on cloud, workload and code analysis in order to discover immediately exploitable risks.

Shift Left to rapidly and safely fix those vulnerabilities

Customers need end-to-end platforms that don’t just help find the vulnerabilities but also help fix them. This is why we’re investing in Green Agent to help companies easily identify root causes (cloud-to-code) and automatically deploy the best fix.

Detect and respond to AI-enabled attackers

In a world where 0-day, 1-day, and perhaps even 1-hour vulnerabilities are the norm, our runtime protection tools must be able to understand code and utilize that context to more accurately detect exploitation and post-compromise activity.

The connection between cloud, code and runtime is the foundational strategy of Wiz Defend. Our goal is to allow defenders to rapidly triage suspicious behavior. By automating the entire process with Blue Agent, SOC teams can investigate AI-enabled attacks at the speed of AI.

Summary

Claude Mythos is a meaningful milestone, but the shift it represents has been building for a while now. The best response is preparation, not panic. We have a window of opportunity in which to build continuous AI-powered security into everything we do, ultimately making software more secure than ever.

AI offers the ability for security teams to use their internal context to get the upper hand on attackers -- from finding vulnerabilities, to fixing them, to responding to exploitation. This is the only way to stay ahead of attackers who we see increasingly adopting these capabilities into their arsenal.
