International banks, major technology companies and governments were swept into a cybersecurity “hysteria” last month as they scrambled to manage the dangers associated with Mythos, the Anthropic AI model said to be so sophisticated that it uncovered thousands of previously hidden flaws in global software infrastructure.
There’s only one complication: the capability sparking concern is already available.
Cybersecurity analysts and artificial intelligence experts told CNBC that the software weaknesses exposed by Mythos can already be identified using existing AI systems, including models developed by Anthropic and OpenAI.
“What we are seeing across the industry now is that people are able to reproduce the vulnerabilities found with Mythos through clever orchestration of public models to get very, very similar results,” said Ben Harris, CEO of cybersecurity company watchTowr.
Mythos has rattled corporate leaders and policymakers alike amid growing fears that a dangerous new phase of AI-driven cybercrime could soon emerge. Anthropic restricted access to only a few U.S. companies, including Apple, Amazon, JPMorgan Chase and Palo Alto Networks, in an attempt to stop malicious actors from gaining access to the technology.
Despite those precautions, the release reportedly encouraged the Trump administration to explore new forms of government supervision for future AI models.
It is the latest in a series of prominent launches from Anthropic that have intensified its competition with OpenAI as both AI firms move closer to long-anticipated public listings. Weeks after Mythos debuted, OpenAI CEO Sam Altman introduced GPT-5.5-Cyber, a model specifically engineered for cybersecurity operations.
OpenAI on Thursday provided limited access to GPT-5.5-Cyber to carefully vetted cybersecurity teams.
The tightly controlled release of Mythos, part of a security initiative called Project Glasswing, was designed to give businesses enough time to reinforce their cyber defenses before an anticipated surge of attacks from criminal organizations and hostile nations.
“The danger is just some enormous increase in the amount of vulnerabilities, in the amount of breaches, in the financial damage that’s done from ransomware on schools, hospitals, not to mention banks,” Anthropic CEO Dario Amodei said this week during an Anthropic event.
‘Scary enough’
However, for experts already operating on the front lines of cyber warfare, one of the primary abilities promoted by Anthropic — locating software vulnerabilities at scale — has reportedly existed for quite some time.
“The models that we have right now are powerful enough to detect zero days in a large scale, and this is scary enough,” Klaudia Kloc, CEO of cybersecurity company Vidoc, told CNBC.
According to Kloc, that capability has existed for “a couple of months, if not a year.”
The phrase “zero-day” refers to a previously undiscovered software weakness that has not yet been fixed, giving attackers an opportunity to exploit it before defenders can respond.
Researchers at Vidoc relied on a process known as “orchestration” to test whether they could uncover the same vulnerabilities Mythos identified. The method involves building workflows that divide code into smaller sections while coordinating several tools or AI systems to cross-check findings.
“We ran older models against the same code base to see if we’d be able to detect the same vulnerabilities,” Kloc explained. “We did, with both OpenAI and Anthropic’s older models.”
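The orchestration approach Kloc describes can be sketched in outline. The code below is a minimal illustration only, not Vidoc's actual pipeline: the scanner functions are hypothetical stand-ins for calls to different AI models, and a finding is kept only when more than one scanner flags the same location.

```python
from collections import Counter

# Hypothetical stand-ins for AI model calls. In a real pipeline each would
# send a code chunk to a separate model API and parse (line, issue) findings
# out of the response; here they just pattern-match for a hardcoded secret.
def scanner_a(chunk):
    return [(i, "hardcoded secret") for i, line in enumerate(chunk)
            if "password =" in line]

def scanner_b(chunk):
    return [(i, "hardcoded secret") for i, line in enumerate(chunk)
            if "password" in line and "=" in line]

def split_into_chunks(source_lines, size):
    """Divide the code base into smaller sections, as described above."""
    return [source_lines[i:i + size] for i in range(0, len(source_lines), size)]

def orchestrate(source_lines, scanners, chunk_size=50, min_votes=2):
    """Fan each chunk out to several scanners and cross-check the results:
    a finding survives only if at least `min_votes` scanners agree on it."""
    confirmed = []
    for idx, chunk in enumerate(split_into_chunks(source_lines, chunk_size)):
        base = idx * chunk_size  # offset back into the full file
        votes = Counter()
        for scan in scanners:
            votes.update(set(scan(chunk)))  # one vote per scanner per finding
        confirmed += [(base + line, issue)
                      for (line, issue), n in votes.items() if n >= min_votes]
    return sorted(confirmed)

code = ["import os", 'password = "hunter2"', "print(password)"]
print(orchestrate(code, [scanner_a, scanner_b], chunk_size=2))
# → [(1, 'hardcoded secret')]
```

The cross-checking step is what makes the cheap-models-in-parallel strategy workable: any single model produces noisy results, but requiring agreement between independent scanners filters out much of the noise.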
Another cybersecurity firm, Aisle, determined that many of Mythos’s headline discoveries could be recreated using lower-cost AI systems operating simultaneously — indicating that scale and coordination mattered more than simply using the latest model.
“A thousand adequate detectives searching everywhere will find more bugs than one brilliant detective who has to guess where to look,” Aisle founder Stanislav Fort wrote in a blog post.
In comments made to CNBC, Anthropic did not challenge claims that earlier AI models were already capable of detecting software vulnerabilities.
In fact, a spokesperson explained that Anthropic has spent months cautioning that AI’s cybersecurity capabilities were evolving rapidly. The company referenced a February blog post showing that Claude Opus 4.6, a publicly accessible model, identified more than 500 “high severity” vulnerabilities in open-source software.
Speaking during the Anthropic event this week, Amodei reinforced that message, stating that while Mythos significantly expanded the scale of vulnerabilities uncovered, the broader trend itself was not unexpected.
“The risks are very real. This is why we took the actions we did,” Amodei said. “But they’re also, in some sense, not that surprising. … We’ve been seeing warnings of this for a while.”
Hysteria and panic
What distinguishes Mythos, according to Anthropic, is its ability to go further: generating functional exploits with little or no human assistance, effectively automating work that previously demanded highly skilled cybersecurity researchers.
Still, cybersecurity specialists argue that hackers working for criminal organizations and hostile governments already possess those abilities. Hackers in North Korea, China and Russia “know how to do this, with or without Anthropic,” Kloc said.
The growing threat of AI-assisted hacking has left corporations and regulators increasingly anxious about defending critical systems from a fresh wave of ransomware and cyberattacks, according to Harris.
He described recent discussions with banks, insurers and regulators as “hysteria.”
Even before the rise of generative AI, companies were already dealing with skilled hackers exploiting newly discovered vulnerabilities within hours, while repairing those flaws often required days or even weeks. In many situations, applying security patches means temporarily shutting down important systems, making the challenge even harder.
“The industry is panicking about the number of vulnerabilities they face now,” Harris said. “But even before Mythos is widely available, it couldn’t fix vulnerabilities fast enough.”
Previously, only a relatively small group of experts worldwide had the expertise and time necessary to discover obscure software weaknesses and exploit them, Harris noted. Now, with publicly available AI systems, the barriers to launching cyberattacks have dropped significantly.
That means banks and other organizations are likely to face a growing number of attacks, while software systems that once attracted little attention from cybercriminals could increasingly become targets, Harris added.
Advantage: Offense
While Anthropic, OpenAI and other technology companies are attempting to build cyber defense systems capable of matching the threats they have identified, researchers say the early advantage currently belongs to attackers rather than defenders.
JPMorgan Chase CEO Jamie Dimon appeared to acknowledge this last month when he said that although AI tools could eventually help businesses defend themselves against cyberattacks, they are currently making organizations more vulnerable.
“You have a significant increase in the volume of vulnerabilities discovered, but they don’t seem to have deployed a tool that helps you fix them,” said Justin Herring, partner at law firm Mayer Brown and former executive deputy superintendent for cybersecurity at New York’s financial regulator.
“Vulnerability management is the great Sisyphean task of cybersecurity,” Herring added.
The limited group involved in the original Mythos rollout received an early advantage in patching vulnerabilities, but critics say there is also a drawback. Independent AI researchers still have not been granted access to Mythos to verify Anthropic’s claims or begin developing defenses against it.
Some experts argue this prevented the wider cybersecurity community from participating in the solution.
It has created “tiers of haves and have-nots,” which could slow cybersecurity innovation, said Pavel Gurvich, CEO of cybersecurity startup Tenzai, which uses Anthropic’s AI models.
Many cybersecurity startups are now racing to create solutions that can help businesses navigate this new AI era, he added.
“They’re trying to figure out the best way to fix the world before this becomes accessible to the world,” said Ben Seri, co-founder of cybersecurity startup Zafran Security. “It’s this kind of chicken-and-egg situation, and you’re going to break some eggs. It’s unavoidable.”
Source: CNBC