The Bot Market
News
31 Mar 2026 · 8 min read

Anthropic leaked its most powerful model from an unsecured bucket: you really can't make this up

By Andy Webb

A vault door standing open with documents spilling out

Oops! Yesterday we wrote about Anthropic's 50-features-in-52-days pace and raised the issue that moving this fast creates coordination problems no one has solved yet. We didn't expect Anthropic to prove the point within 24 hours.

But... here we are.

Anthropic, the company that builds what is arguably the most capable AI system on the planet, the company whose own model just discovered and disrupted a Chinese state-sponsored hacking campaign, the company that warns governments about unprecedented cybersecurity risks, left nearly 3,000 internal documents sitting in a public, unsecured, searchable data store. Anyone with a browser could find them. Oh boy!

Among those documents: a draft blog post announcing their next model, Claude Mythos. The most powerful AI they've ever built. A model they themselves describe as posing "unprecedented cybersecurity risks." Leaked, by their own hand, through a misconfigured CMS.

Ohhh the irony.

So, what actually leaked then...

Fortune broke the story on 26 March after cybersecurity researchers Roy Paz from LayerX Security and Alexandre Pauwels from the University of Cambridge independently discovered the exposed data. Close to 3,000 assets linked to Anthropic's blog were publicly accessible: draft posts, PDFs, images, internal memos, and what appears to be structured data for a planned product launch page.

The headline finding: Anthropic has completed training a new model called Claude Mythos, part of a new tier they're calling "Capybara." If those names sound like they were generated during a late-night brainstorming session, you're probably not wrong. Both appear to refer to the same underlying model. Anthropic says the name was chosen to evoke "the deep connective tissue that links together knowledge and ideas." We'd have gone with something less... rodent-like, but... each to their own.

What really matters is what the model can do. According to the leaked draft, Capybara sits above Opus in Anthropic's lineup. Not a small upgrade. A new tier entirely. Their current hierarchy runs Haiku (small, fast, cheap), Sonnet (balanced), and Opus (largest, most capable). Capybara/Mythos would be a fourth tier above Opus: larger, more intelligent, more expensive, and by Anthropic's own assessment, dramatically better at coding, academic reasoning, and cybersecurity tasks.

Anthropic has since confirmed the model is real. A spokesperson told Fortune it represents "a step change" in performance and is "the most capable we've built to date." They're testing it with a small group of early access customers. No public release date has been announced.

The cybersecurity problem is genuinely alarming though

And this is where the story stops being funny and starts being concerning.

Anthropic's own draft describes Mythos as "currently far ahead of any other AI model in cyber capabilities" and warns it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." Now, read that sentence twice. The company building the model is telling you, in their own words, that it can find and exploit software vulnerabilities faster than human defenders can patch them. Nothing to see here, obviously.

The market certainly believed them. When the leak hit the news merry-go-round, cybersecurity stocks had one of their worst single-day sell-offs in recent history. CrowdStrike dropped over 7%. The logic was brutal and simple: if the next generation of AI models makes every hacker a nation-state-level threat, then every cybersecurity company's value proposition just got a whole lot weaker.

Anthropic's planned rollout certainly reflects their concern. Rather than a broad consumer launch, they're giving early access to cybersecurity organisations first, letting defenders get a head start before the model reaches wider distribution. This is the same playbook they used with Opus 4.6, which was already finding previously unknown vulnerabilities in production codebases. Mythos is reportedly a significant step beyond even that.

And this isn't theoretical for them either; they've got a track record here. Anthropic has already dealt with real-world abuse. Late last year, they discovered that a Chinese state-sponsored group had been running a coordinated campaign using Claude Code to infiltrate roughly 30 organisations, including tech companies, financial institutions, and government agencies. Anthropic detected it, banned the accounts, and notified the affected companies. But the fact that it happened at all, and with a model less capable than Mythos, should concentrate the mind.

What else was in the bucket? Not the Epstein files, sadly...

An open bucket tipped on its side with documents spilling out

The Mythos reveal is without doubt the headline, but the rest of the leak tells its own story.

Among the 3,000 publicly accessible assets: details of an upcoming invite-only CEO summit at an 18th-century English manor turned luxury hotel. Anthropic CEO Dario Amodei will attend. The document describes it as an "intimate gathering" for "Europe's most influential business leaders" to experience "unreleased Claude capabilities." The attendee names weren't listed, but the vibe is very clear: this is Anthropic's enterprise sales push for the European market, dressed up in country house elegance. A Downton Abbey pitch deck, essentially, which would make a welcome change from the regular accented bars Claude defaults to.

There was also, apparently, a document with an employee's parental leave details in the mix. A lawsuit filing. Internal memos. Unused blog assets. The kind of material that exists in every CMS, but that is supposed to stay private behind locked doors.

The root cause, according to Anthropic, was "human error" in the configuration of their content management system. The CMS sets new digital assets to public by default. Someone, or several someones, didn't change that setting. For close to 3,000 files.

And so the coordination gap strikes again

A speedometer buried in the red with the dashboard cracking apart

We wrote yesterday about the coordination gap: what happens when engineering moves at 10x speed but every other function in the business stays the same. We argued that marketing, sales, documentation, compliance, and support can't keep pace when development ships 50 features in 52 days.

We didn't include "information security" in that list. Our fault, we should have.

This leak isn't a sophisticated attack. It's not a zero-day exploit or a state-sponsored intrusion. It's someone uploading draft content to a CMS with the default visibility set to public. It's the kind of mistake that happens when a company is moving incredibly fast and the processes around that speed haven't caught up, you know, like writing tests...

Anthropic has approximately 1,500 employees, some of the sharpest minds in AI, billions in funding, and they left a blog post about their most sensitive model in an open bucket because someone didn't tick a box in a content management tool. This is not an engineering failure. It's an organisational one. And it's precisely the kind of failure that happens when velocity outpaces coordination.

A company building the tools that let other companies ship at impossible speed just demonstrated, publicly and embarrassingly, that speed without coordination creates exactly the kind of risk their own new model is designed to exploit.

You really can't make this up.

How do we learn from things like this?

Strip away the irony and there are three key lessons here.

First, a new tier of AI model is coming for us to worship. Mythos/Capybara will sit above Opus, which is already the most capable model available through Claude Pro. When it launches, expect it to be expensive, limited in access, and aimed initially at enterprise and security customers rather than the $20/month subscription most of you enjoy. Anthropic are tightening the purse strings, as seen in their rather subtle lowering of usage limits. If you're currently paying for Claude Pro or using the API, this won't replace your workflow immediately. But it will trickle down in time.

Second, the cybersecurity landscape is about to change, and honestly, I find that rather terrifying with the world the way it is right now. If Anthropic's own assessment is accurate, Mythos represents a step change in AI-driven cyber capability. The defensive applications are real and to be applauded: finding vulnerabilities before hackers do. But the offensive applications are equally terrifyingly real: automating the exploitation of those vulnerabilities at a scale never seen before; who'd want to be a CISO... Every company running software, which is every company, needs to pay attention and prepare for this.

Third, the leak itself is a case study in operational risk. If Anthropic can make this mistake, anyone can. The CMS default-to-public setting is a trap that exists in dozens of content management tools. If your company uses a CMS, check your default visibility settings today. Not tomorrow. Today.
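If you want to make that check concrete, here's a minimal sketch in Python. It assumes your CMS or object store can export asset metadata with a visibility field; the field names and the "public"/"private" values are hypothetical stand-ins for whatever your tool actually uses:

```python
# Flag CMS assets that are publicly visible, including ones that never
# had their visibility set and so inherit a public default -- the exact
# failure mode behind the leak. Field names are hypothetical; adapt
# them to your CMS's export format.

def find_public_assets(assets, default_visibility="public"):
    """Return every asset whose effective visibility is not 'private'."""
    return [
        a for a in assets
        if a.get("visibility", default_visibility) != "private"
    ]

# Example export: asset 2 has no visibility set at all, so the
# public-by-default trap applies to it.
assets = [
    {"id": 1, "name": "draft-mythos-post.md", "visibility": "public"},
    {"id": 2, "name": "internal-memo.pdf"},  # default applies here
    {"id": 3, "name": "press-image.png", "visibility": "private"},
]

exposed = find_public_assets(assets)
print([a["name"] for a in exposed])
# -> ['draft-mythos-post.md', 'internal-memo.pdf']
```

The point of the sketch is the `default_visibility` parameter: an audit that only looks for assets explicitly marked public would have missed the nearly 3,000 files that were simply never marked at all.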

How might this develop

Anthropic hasn't announced a release date for Mythos. The leaked draft said the model is "very expensive for us to serve, and will be very expensive for our customers to use" and that they're working to make it more efficient before a general release. Expect a slow, controlled rollout starting with API access for enterprise customers, eventually filtering down to Claude Pro and Claude Team subscribers.

The new "Capybara" tier is a structural change worth watching closely. Anthropic's current lineup (Haiku, Sonnet, Opus) has been stable for a while. Adding a tier above Opus changes the pricing conversation for every customer and every competitor. OpenAI will need to respond. Google will need to respond. And the companies building on these APIs will need to decide whether the capability jump justifies the cost. Let's face it, we're all tech junkies (thanks, Apple), hooked on the latest and greatest the world can offer and scared of being left behind.

Here at The Bot Market, we'll be among the first to test Mythos when it becomes available, and we'll give you an honest assessment of whether it's a genuine step change or another round of benchmarks that don't translate to real-world use. We've seen enough "most powerful model ever" announcements to know that the proof is in the prompting, not the press release.

...Even when the press release leaks from an unsecured bucket.
