TL;DR

Anthropic left nearly 3,000 internal files in a publicly searchable data store, including draft blog posts describing a new model called Claude Mythos. It sits in a brand-new “Capybara” tier above Opus and reportedly scores far higher than Claude Opus 4.6 on coding, reasoning, and cybersecurity benchmarks. Cybersecurity stocks dropped 4-7% on the news. Anthropic confirmed the model exists, called it “a step change,” and said it’s being tested with early-access customers. The company that warns about AI cybersecurity risks got caught by a default CMS setting.

What Actually Leaked

Fortune reporter Bea Nolan discovered the exposed files on Thursday, March 26. The root cause was embarrassingly mundane: Anthropic’s content management system had a default setting that made uploaded assets public unless someone explicitly toggled them private. Nobody toggled.
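The failure mode here is a classic default-value trap, and it's easy to illustrate. The sketch below is purely hypothetical — the reporting doesn't name Anthropic's CMS, and the asset fields are invented — but it shows how "public unless explicitly toggled" differs from the safer inverse:

```python
def public_assets(assets: list[dict]) -> list[str]:
    """IDs of assets that are effectively public.

    Dangerous default: an asset with no explicit visibility flag is
    treated as public -- the same failure mode as a CMS that exposes
    uploads unless someone toggles them private.
    """
    return [a["id"] for a in assets if a.get("public", True)]

def explicitly_public(assets: list[dict]) -> list[str]:
    """Safer audit: only assets explicitly marked public count."""
    return [a["id"] for a in assets if a.get("public") is True]

# Illustrative data -- these asset names are invented, not from the leak.
sample = [
    {"id": "draft-post.md", "public": True},
    {"id": "retreat-invite.pdf"},            # flag never set
    {"id": "internal-graphic.png", "public": False},
]
print(public_assets(sample))      # ['draft-post.md', 'retreat-invite.pdf']
print(explicitly_public(sample))  # ['draft-post.md']
```

The difference between the two functions is the whole incident: under the first default, every asset nobody thought about is exposed; under the second, forgetting the flag fails closed.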

The result: close to 3,000 assets linked to Anthropic’s blog — draft posts, images, PDFs, internal graphics — sat in a publicly searchable data store. Security researchers Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge confirmed the scope.

Among the exposed documents:

  • Two versions of the same blog post announcing “Claude Mythos,” differing only in the model name — one called it “Mythos,” the other “Capybara”
  • A PDF invitation for an exclusive two-day CEO retreat at an 18th-century English manor, where Dario Amodei would give private strategy briefings
  • Internal images, including one referencing an employee’s parental leave

Anthropic restricted access after Fortune reached out but confirmed the model’s existence, calling it “a step change” and “the most capable we’ve built to date.”

Capybara: A Fourth Tier

Anthropic currently sells models in three tiers: Haiku (fast and cheap), Sonnet (balanced), and Opus (most capable). Capybara would add a fourth tier above all three.

The leaked drafts describe Capybara as “a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful.” Mythos appears to be the first model in this tier, the same way Opus 4.6 is the current top of the Opus line.

What this means in practice: expect higher prices. The drafts acknowledge the model is “very expensive for us to serve, and will be very expensive for our customers to use.” Anthropic says they’re working to make it more efficient before any general release. No specific pricing was included in the leaked materials.

What Mythos Can Reportedly Do

According to the leaked draft posts, Mythos scores “dramatically higher” than Claude Opus 4.6 on tests of software coding, academic reasoning, and cybersecurity tasks. Opus 4.6 had just topped Terminal-Bench 2.0 at 65.4%, beating GPT-5.2-Codex, so “dramatically higher” would be a significant jump.

But here’s the problem: every article reporting on this quotes the same phrase — “dramatically higher scores” — and nobody has the actual numbers. How much higher? On which benchmarks? Is this a 5% gap or a 50% gap? All capability claims trace back to Anthropic’s own drafted marketing materials. No third-party researcher has tested the model. No independent evaluation exists.
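The 65.4% baseline at least bounds the possibilities. A quick sketch (the gap values below are hypothetical, not from the leak) shows why the missing numbers matter — on a benchmark capped at 100%, "dramatically higher" can mean at most about 35 points:

```python
# Claude Opus 4.6's reported Terminal-Bench 2.0 score (from the article).
baseline = 65.4

# "Dramatically higher" is unquantified; these gaps are hypothetical,
# chosen only to show how wide the plausible range is.
for gap_pts in (5, 15, 30):
    print(f"+{gap_pts} pts -> {baseline + gap_pts:.1f}%")

# Largest absolute gain possible on a benchmark capped at 100%:
ceiling = 100 - baseline
print(f"hard ceiling: +{ceiling:.1f} pts")
```

A 5-point gain would be notable; a 30-point gain would be near-saturation. Without the actual figure, both readings are consistent with the leaked phrasing.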

The draft also describes Mythos as being “designed to create deep connective tissue between ideas and knowledge,” which is the kind of phrase that sounds impressive but doesn’t map to any measurable property.

Take the claims seriously, but wait for independent benchmarks before drawing conclusions.

The Cybersecurity Angle

This is where the story gets genuinely interesting — and uncomfortable.

The leaked draft describes Mythos as “currently far ahead of any other AI model in cyber capabilities.” It goes further, warning that the model “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

Anthropic is being unusually direct here. Most AI companies downplay dual-use risks in their announcements. Anthropic chose to lead with the danger — the model is so capable at finding and exploiting vulnerabilities that the company believes it changes the attacker-defender balance.

Their planned mitigation: release Mythos first to cybersecurity defense organizations through an Early Access Program (EAP), then expand availability gradually. Give defenders a head start before the capability class becomes widespread.

This isn’t theoretical concern. Anthropic disclosed earlier this year that a Chinese state-sponsored group used Claude Code to infiltrate roughly 30 organizations — tech companies, financial institutions, and government agencies — before Anthropic detected and shut down the campaign. They’ve seen firsthand what happens when capable AI models are used for offensive operations.

Market Fallout

The leak hit financial markets hard. Cybersecurity stocks dropped across the board on Friday:

  • Palo Alto Networks (PANW): down ~7%
  • CrowdStrike (CRWD): down ~6.4%
  • Zscaler (ZS): down ~5.8%
  • Fortinet (FTNT): down ~4%

The logic: if next-generation AI models can outpace human defenders, existing cybersecurity tools face a capability gap. Investors rotated out of the sector on that thesis.

Bitcoin also dropped to $66,000, part of a broader risk-off move triggered by the same AI-powered cyber threat narrative.

Whether this sell-off sticks depends on whether Mythos actually delivers on those benchmarks — and whether competitors (OpenAI’s “Spud,” Google’s Gemini 3.x) are close behind with similar capabilities.

The Irony Problem

I’ll state the obvious: a company that builds AI models with “unprecedented cybersecurity capabilities” exposed those models’ existence through a default CMS setting. This isn’t a sophisticated supply chain attack or a zero-day exploit. Someone forgot to click a checkbox.

And it’s not Anthropic’s first security stumble. In January 2026, days after Claude Cowork launched, a flaw surfaced that allowed attackers to exploit the tool’s API and steal user data.

For a company founded explicitly on the premise of building safe AI — while competitors like GitHub are quietly expanding data collection — this pattern matters. The technical sophistication of their models keeps advancing. The operational security around those models keeps tripping on basic infrastructure hygiene. You can build the most capable AI system in the world, but if your CMS defaults are wrong, a reporter finds out about it before your customers do.

Anthropic characterized the exposed documents as “early drafts unrelated to its core infrastructure, AI systems, customer data, or security architecture.” Fair enough — no customer data or model weights leaked. But the reputational hit is real, and it feeds into a broader question about whether AI labs practice the security standards they preach.

What Happens Next

Based on the leaked documents and Anthropic’s confirmation:

Short term: Mythos stays in the Early Access Program. Cybersecurity defense organizations get first access. Pricing and general availability remain undefined.

Medium term: Anthropic needs to make Mythos cheaper to run before broad release. The drafts repeatedly flag inference costs as a blocker. Given that Opus 4.6 already carries premium API pricing, a tier above it could push costs into territory that limits the addressable market to enterprises.

Competitive context: OpenAI is reportedly finalizing “Spud,” its next major model after killing Sora and trimming non-core products. Google has Gemini 3.1 variants rolling out. The existence of a Capybara tier suggests Anthropic is betting on capability stratification — charging significantly more for significantly better models — rather than the race-to-free approach some competitors favor.

IPO timing: Anthropic’s IPO is rumored for around October 2026. Mythos could be the flagship product that anchors their public offering valuation. Anthropic is approaching $19 billion in annualized revenue; a Capybara-tier product with enterprise pricing could meaningfully accelerate that number.

FAQ

Is Claude Mythos available to use right now?

No. It’s in a limited Early Access Program restricted to cybersecurity defense organizations. Anthropic hasn’t announced general availability dates or pricing.

Does Capybara replace Opus?

No. Capybara is a new tier above Opus. Think of it as a premium layer: Haiku → Sonnet → Opus → Capybara. Existing Opus, Sonnet, and Haiku models aren’t going anywhere.

How much better is Mythos than Opus 4.6?

Nobody outside Anthropic knows. The leaked drafts say “dramatically higher scores” on coding, reasoning, and cybersecurity benchmarks, but no actual numbers have been published. Wait for independent evaluations.

Should I worry about the cybersecurity implications?

The concern is real but not immediate. Anthropic is restricting access specifically because of the dual-use risk. The broader worry — that AI models are approaching a capability threshold where automated exploitation outpaces human defense — is worth tracking regardless of what Mythos specifically can do.

Was any customer data exposed in the leak?

No. The exposed files were draft blog posts, marketing images, event invitations, and internal graphics. Anthropic stated that no customer data, model weights, or core infrastructure was affected.

Bottom Line

Anthropic accidentally showed its hand. Mythos and the Capybara tier represent a bet that there’s a market for AI models priced well above current Opus rates, justified by a meaningful capability jump. The cybersecurity angle adds urgency and a built-in narrative for their cautious rollout strategy.

But right now, Mythos is a press release that leaked early — nothing more. The benchmark claims are unverified. The pricing is undefined. The timeline is vague. What’s concrete is that cybersecurity stocks lost billions in market cap on the back of Anthropic’s own drafted marketing copy, and Anthropic itself got exposed by a checkbox. The gap between what AI companies build and how they operate their own infrastructure remains the most underreported story in the industry.