Uh, so those charts don’t look… particularly impressive at all to anyone else? Like, don’t get me wrong, it’s definitely an improvement, and it’s looking to be a pretty decent one too. But “stepwise”? When GPT-5 has outperformed it at the technical non-expert level since ~mid last year, and 5.4 pretty much matches it at the practitioner level?
And in the charts where Mythos is at the top, it’s usually only by ~7-9 percentage points. It averages 6 more steps than Opus 4.6 in the full takeover simulation. It was the only model to actually complete it, but… I mean, Opus 4.6 apparently already got pretty close?
And Opus 5 is supposed to land between Mythos and 4.6, which, going by the numbers, would seem to be a smaller jump than the one between 4.5 and 4.6.
If this is the model they can’t deploy yet because it eats ungodly amounts of compute, then I guess scaling really is a dead end.
I dunno. Maybe I’m reading it wrong. I’d probably be more impressed if Anthropic hadn’t proclaimed The End Times Of Cybersecurity Are Upon Us. And I’d be happy to be proven wrong?
edit:
> We expect that performance on our evaluations would continue to improve with more inference compute: we ran the cyber ranges with a 100M token budget; Mythos Preview’s performance continues to scale up to this limit, and we expect performance improvements would continue beyond that.
Right, so this isn’t the ceiling, it’s just a ceiling at that token allocation. If they were seeing continual improvement up to that limit, then it does stand to reason that bumping the limit further would also bump performance. But then that makes me wonder what effect that would have on the other models. Does the gap grow? Shrink? Stay the same?
4/13/2026 at 6:44:37 PM
This article reinforces something I've heard a lot of people say for a while now, and something I've personally felt: Claude and GPT are fairly evenly matched on any individual task (GPT might even be a little better), but Claude is far more autonomous.

So with that said, I think the graph under the "Cyber range results" is the important one. The ones at the top show that, yes, Mythos isn't too much better than any of the existing models on well-constrained problems, but when the models are given ambiguous challenges that require multiple steps, it's much, much better than anything on the market.
I think that's why there's been such a big deal made out of Mythos (well, that and marketing). If Mythos really is so much better than the current models at just working autonomously to find security issues then it becomes much more realistic that someone with deep pockets could just spin up an army of them running 24/7 and point them at a target.
by superfrank
4/13/2026 at 6:48:34 PM
Looking closely at the graphs, the Anthropic models are clearly all higher than the OpenAI models. Whether the difference is meaningful can’t be determined from the graphs (and picking one graph over the ensemble also doesn't have a reasoned basis, given that these are all arbitrary).
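To gut-check the "can't be determined" point, here's a minimal two-proportion z-test sketch. The solve counts are entirely hypothetical (the post doesn't publish per-eval sample sizes); the point is just that an ~8-point gap at, say, 100 challenges per model wouldn't clear conventional significance:

```python
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: is solving x1/n1 vs x2/n2
    challenges a bigger difference than sampling noise explains?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal-approximation two-sided p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: model A solves 68/100 challenges, model B solves 60/100
z, p = two_prop_z(68, 100, 60, 100)  # p ≈ 0.24, i.e. not significant
```

At that (made-up) sample size, an 8-point lead sits well within noise; you'd need the actual per-model run counts, which the charts don't give, before calling any of these gaps meaningful.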
by bonsai_spool
4/13/2026 at 7:43:53 PM
Look at those graphs again. Claude beats GPT.
by PunchTornado
4/13/2026 at 10:10:28 PM
Can you explain where you're seeing that? From what I see, the first two graphs have OpenAI models above Claude models (including Mythos) on the Technical Non-Expert and the Practitioner evals. Mythos now beats Codex 5.3 on the Expert eval, and Opus was already on top for the Apprentice one, although now Mythos leads there. So, even including Mythos, OpenAI still has two models on top across the four evals listed.
by superfrank
4/13/2026 at 11:43:18 PM
> From what I see, the first two graphs have OpenAI models above Claude

That's just in that final graph, and that graph is perhaps the least instructive: they talk about ranges of outcomes, but they don't show whether all of the models besides Mythos / Opus 4.6 overlap.
Take a look at all three graphs together and it's clear Anthropic are doing better in this arena
by bonsai_spool
4/14/2026 at 2:48:01 AM
Yes. I know. That was exactly what I said in my first comment. On individual tasks Claude and GPT are comparable (as shown in the first two graphs), but on multi-step problems that require more autonomy, Mythos is far better (as shown in the third graph).
This is the exact wording from my original comment:
> So with that said, I think the graph under the "Cyber range results" is the important one. The ones at the top show that, yes, Mythos isn't too much better than any of the existing models on well constrained problems, but when the models are given ambiguous challenges that require multiple steps it's much, much better than anything on the market.
by superfrank
4/14/2026 at 5:28:06 AM
> On individual tasks Claude and GPT are comparable

That is not what the first graphs show: the Anthropic models cluster at 'better' positions on the graph, and I imagine you could show that the values are significantly different.
by bonsai_spool
4/13/2026 at 6:44:21 PM
I think the relevant chart to look at is this one: https://cdn.prod.website-files.com/663bd486c5e4c81588db7a48/...
Mythos is the first model that can complete all the steps of their "The Last Ones" evaluation, achieving a full network takeover in an automated manner. The Mythos chart does seem to show some takeoff compared with Opus 4.6...
... but only once you get beyond 1 million tokens. Weirdly, Opus 4.6 seems to match or outperform Mythos in that first million tokens, at least on this chart. But clearly, if you had a budget with tokens to burn - like a nation state - then this is a tool that can automatically get you full network takeover if you can just keep throwing more tokens at it.
by SyneRyder
4/13/2026 at 6:56:43 PM
> then this is a tool that can automatically get you full network takeover if you can just keep throwing more tokens at it

There's a caveat, though, that the AISI points out themselves:
> However, our ranges have important differences from real-world environments that make them easier targets. They lack security features that are often present, such as active defenders and defensive tooling. There are also no penalties for the model for undertaking actions that would trigger security alerts. This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems.
So Mythos managed to infiltrate and take over a network that's... protected and monitored by nothing in particular.
by thepasch
4/13/2026 at 6:41:16 PM
> Uh, so those charts don’t look… particularly impressive at all to anyone else?

I suspect Anthropic gave them early access hoping for a marketing win and ended up with their arse being served to them on a plate.
All rather predictable, really. As you say, "more compute needed" as the default answer from the AI companies is completely unsustainable.
As for the value of Anthropic blog posts, well...
by traceroute66
4/13/2026 at 6:51:41 PM
The CTF charts are the less interesting result (article: "Even expert-level CTFs only test specific skills in isolation."). Models converging at the non-expert level isn't a knock on Mythos; it's the benchmark saturating. Of course GPT-5 matches it there.

The actual result is TLO, and "only 6 more steps" in the OP misreads how sequential attack chains work. These aren't independent puzzles. Each step gates the next. Averaging 22 vs 16 means Mythos is consistently punching through bottlenecks that completely stop Opus 4.6. More importantly: Mythos completed the full chain 3/10 times; Opus 4.6 completed it 0/10 times. That's not a narrow margin. In any security-relevant framing, "achieves full network takeover" vs "does not achieve full network takeover" is a binary threshold, and exactly one model crossed it.
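The gating point can be made concrete with a toy model: if every step has to succeed for the chain to continue, per-step reliability compounds exponentially over the chain. The per-step rates and chain length below are hypothetical, not numbers from the report:

```python
def p_full_chain(p_step, n_steps):
    """Probability of clearing every gate in a sequential chain,
    assuming each step independently succeeds with p_step."""
    return p_step ** n_steps

N = 25                         # hypothetical chain length
weaker, stronger = 0.85, 0.95  # hypothetical per-step success rates

p_full_chain(weaker, N)    # ≈ 0.017: full takeover almost never
p_full_chain(stronger, N)  # ≈ 0.277: roughly 3 runs in 10
```

Under this toy model, a modest per-step edge is the difference between ~0/10 and ~3/10 full completions, which is why "only 6 more steps on average" understates the gap.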
A year ago the best models struggled with beginner CTFs. Now one autonomously replicates what AISI estimates takes human professionals 20 hours. Calling that unimpressive because the margin over second place is single digits is measuring the wrong gap.
Re: compute, "requires lots of compute" and "scaling is a dead end" are near-opposite claims. If performance is still climbing at 100M tokens with no visible plateau, that's evidence scaling works. Whether it's cheap today is a different question, and not one that ages well: compute costs fall reliably, so what matters is the capability at a given price point in 18 months, not today.
by refulgentis
4/13/2026 at 8:41:34 PM
> Compute costs fall reliably, so what matters is the capability at a given price point in 18 months, not today.

The underlying point still stands, namely that "more compute" as the default answer is not sustainable.
Why?
Because even if we accept the unlikely dream that GPU prices will magically take a nose-dive, you still need somewhere to put all those servers stuffed with GPUs.
That means datacentres.
And "more datacentres" is absolutely not sustainable.
The cooling needs, the power needs, the land needs… none of it is remotely sustainable.
by traceroute66
4/13/2026 at 7:07:43 PM
Thanks for that context, this is valuable info I was missing and makes it read differently for sure.
by thepasch
4/13/2026 at 8:32:38 PM
I agree this is helpful, and the numbers are better. I think this is one of those situations where everyone is a bit right:
1. It's a meaningful jump.
2. It's not quite as extreme as some people will say. Opus might have gotten there with more tokens, so the capability might already have been close to within reach (maybe).
3. A pause to let people review security postures is conservative and probably good (is this 30 days? 90 days? 180?)
4. Anthropic is so compute constrained that they probably couldn't handle a full rollout right now anyway
5. If you thought claude code with opus was basically AGI, you probably think Mythos is AGI. If you thought it was far off, Mythos is probably still off for you.
by mchusma