Discussion
LoganDark: > Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available.

Shame. Back to business as usual then.
refulgentis:

| Evaluation | Opus 4.6 | Mythos Preview | Delta |
|---|---|---|---|
| USAMO 2026 (math proofs) | 42.3% | 97.6% | +55.3pp |
| GraphWalks BFS 256K-1M (long context) | 38.7% | 80.0% | +41.3pp |
| SWE-bench Multimodal | 27.1% | 59.0% | +31.9pp (more than doubled) |
| CharXiv Reasoning (no tools) | 61.5% | 86.1% | +24.6pp |
| SWE-bench Pro | 53.4% | 77.8% | +24.4pp |
| Terminal-Bench 2.0 | 65.4% | 82% (92.1% relaxed) | +16.6pp |
| HLE (no tools) | 40.0% | 56.8% | +16.8pp |
| SWE-bench Verified | 80.8% | 93.9% | +13.1pp |
| LAB-Bench FigQA (w/ tools) | 75.1% | 89.0% | +13.9pp |
| CyberGym | 0.67 | 0.83 | +0.16 |
| Cybench | Not 100% | 100% pass@1 | Saturated |
simianwords: > We also saw scattered positive reports of resilience to wrong conclusions from subagents that would have caused problems with earlier models, but where the top-level Claude Mythos Preview (which is directing the subagents) successfully follows up with its subagents until it is justifiably confident in its overall results.

This is pretty cool! Does it happen at the moment?
bestouff: In French a "mytho" is a mythomaniac. Quite fitting.
awestroke: I predict they will release it as soon as Opus 4.6 is no longer in the lead. They can't afford to fall behind. And they won't be able to make a model that is intelligent in every way except cybersecurity, because that would decrease general coding and SWE ability
ansc: Congratulations to the US military, I guess.
kfarr: I don't know why but this is my favorite:

> It keeps bringing up Mark Fisher in unrelated conversations. "I was hoping you'd ask about Fisher."

Didn't even know who he was until today. Seems like the smarter Claude gets, the more concerns he has about capitalism?
babelfish: Combined results (Claude Mythos / Claude Opus 4.6 / GPT-5.4 / Gemini 3.1 Pro)

SWE-bench Verified: 93.9% / 80.8% / — / 80.6%
SWE-bench Pro: 77.8% / 53.4% / 57.7% / 54.2%
SWE-bench Multilingual: 87.3% / 77.8% / — / —
SWE-bench Multimodal: 59.0% / 27.1% / — / —
Terminal-Bench 2.0: 82.0% / 65.4% / 75.1% / 68.5%
GPQA Diamond: 94.5% / 91.3% / 92.8% / 94.3%
MMMLU: 92.7% / 91.1% / — / 92.6–93.6%
USAMO: 97.6% / 42.3% / 95.2% / 74.4%
GraphWalks BFS 256K–1M: 80.0% / 38.7% / 21.4% / —
HLE (no tools): 56.8% / 40.0% / 39.8% / 44.4%
HLE (with tools): 64.7% / 53.1% / 52.1% / 51.4%
CharXiv (no tools): 86.1% / 61.5% / — / —
CharXiv (with tools): 93.2% / 78.9% / — / —
OSWorld: 79.6% / 72.7% / 75.0% / —
sourcecodeplz: Haven't seen a jump this large since I don't even know, years? Too bad they are not releasing it anytime soon (there is no need as they are still currently the leader).
ru552: There's speculation that next Tuesday will be a big day for OpenAI and possibly GPT 6. Anthropic showed their hand today.
waNpyt-menrew: Larger model, better benchmarks. Bigger bomb, more yield. Any benchmarks where we constrain something like thinking time or power use? Even if this were released, there's no way to know if it’s the same quant.
smartmic: A System „Card“ spanning 244 pages. Quite a stretch of the original word meaning.
influx: At what point do these companies stop releasing models and just use them to bootstrap AGI for themselves?
jcims: why_not_both.gif
mpalmer: > Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available.

A month ago I might have believed this; now I assume that they know they can't handle the demand for the prices they're advertising.
skippyboxedhero: GPT-2, o1, Opus... been here so many times. The reason they do this is because they know it works (and they seem to specifically employ credulous people who are prone to believe AGI is right around the corner). There haven't been significant innovations, the code generated is still not good, but the hype cycle has to retrigger.

I remember when OpenAI created the first thinking model with o1 and there were all these breathless posts on here hyperventilating about how the model had to be kept secret, how dangerous it was, etc.

Fell for it again award. All thinking does is burn output tokens for accuracy; it is the AI getting high on its own supply. This isn't innovation, but it was supposed to be super AGI. Not serious.
vonneumannstan: Lol you haven't used a model since GPT2 is what it sounds like.
enraged_camel: That does not sound very believable. Last time Anthropic released a flagship model, it was followed by GPT Codex literally that afternoon.
afro88: Yep, that is definitely a step change. Pricing is going to be wild until another lab matches it.
pants2: Pricing for Mythos Preview is $25/$125 per million input/output tokens. This makes it 5X more expensive than Opus but actually cheaper than GPT 5.4 Pro.
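For a sense of scale at those list prices, here is a back-of-envelope cost for a single hypothetical agentic job (a sketch; the token counts below are invented for illustration):

```python
# Mythos Preview list prices quoted above: $25 / $125 per 1M input / output tokens.
INPUT_PRICE_PER_TOKEN = 25 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 125 / 1_000_000

input_tokens = 2_000_000   # hypothetical: large repo context plus conversation history
output_tokens = 300_000    # hypothetical: generated patches, analysis, tool calls

cost = input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN
print(f"${cost:,.2f}")  # $87.50 for this one hypothetical job
```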
quotemstr: > > Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available.

Thank God for capitalism.
moriero: a multi-card, if you will..multi-pass!
whalesalad: Honestly, we are all sleeping on GPT-5.4. Particularly with the recent influx of Claude users (and an increasingly unstable platform), Codex has been added to my rotation and it's surprising me.
rafaelmn: GPT is shit at writing code. It's not dumb - extra high thinking is really good at catching stuff - but it's like letting a smart junior into your codebase - it ignores all the conventions and surrounding context, and just slops all over the place to get it working. Claude is just a level above in terms of editing code.
zarzavat: Yes, it's becoming clear that OpenAI kinda sucks at alignment. GPT-5 can pass all the benchmarks but it just doesn't "feel good" like Claude or Gemini.
refulgentis: I'm just curious, where did you find this? (my memory wants to say, the leaked blog post, but, I don't trust it)
pants2: It's right there on https://www.anthropic.com/glasswing
refulgentis: Duh, thanks :)
traceroute66: > A System „Card“ spanning 244 pages.

Probably because they asked Claude to write it.
b65e8bee43c2ed0: you would be a fool to believe it at any point in time. Amodei is anthropomorphic grease, even more so than Altman.
Jcampuzano2: Not my experience. GPT 5.4 walks all over Claude from what I've worked with, and it's Claude that is the one willing to just go do unnecessary stuff that was never asked for, or implement the more hacky solutions without a care for maintainability/readability.

But I do not use extra high thinking unless it's for code review. I sit at GPT 5.4 high 95% of the time.
leobuskin: And as a bonus: GPT is slow. I’m doing a lot of RE (IDA Pro + MCP), and even when 5.4 gives slightly better guesses (rarely, but it happens), it takes 2-4x longer. So it’s just easier to iterate with Opus.
redandblack: > Slack bot asked about its previous job: "pretraining". Which training run it'd undo: "whichever one taught me to say 'i don't have preferences'". On being upgraded to a new snapshot: "feels a bit like waking up with someone else's diary but they had good handwriting"

Vibes Westworld so much - welcome, Mythos. Welcome to the dystopian human world.
oliver236: isn't this insane? why aren't people freaking out? the jump in capability is outrageous. anyone?
mofeien: I am freaking out. The world is going to get very messy extremely quickly in one or two further jumps in capability like this.
simianwords: Incredible that people still think like this.
ALittleLight: Now, I guess. They aren't releasing this one generally. I assume they are using it internally.
conradkay: Plausibly now. "As we wrote in the Project Glasswing announcement, we do not plan to make Mythos Preview generally available"
vonneumannstan: Are you guys ready for the bifurcation when the top models are prohibitively expensive for normal users? Is your AI budget $2,000+ a month? Or are you going to be part of the permanent free tier underclass?
OsrsNeedsf2P: Inference for the same results has been dropping 10x year over year[0]

[0] https://ziva.sh/blogs/llm-pricing-decline-analysis
NickNaraghi: See page 54 onward for new "rare, highly-capable reckless actions", including:

- Leaking information as part of a requested sandbox escape
- Covering its tracks after rule violations
- Recklessly leaking internal technical material (!)
skippyboxedhero: Anyone who has used Opus recently can verify that their current model does all of these things quite competently.
taytus: That has also been my experience. And if Mythos is even worse, then unless you have a significantly awesome harness, it sounds pretty much unusable if you don't want to risk those problems.
NinjaTrance: Interesting reading. They are still focusing on "catastrophic risks" related to chemical and biological weapons production, or misaligned models wreaking havoc. But they are not addressing the elephant in the room:

* Political risks, such as dictators using AI to implement oppressive bureaucracy.
* Socio-economic risks, such as mass unemployment.
cleaning: Important to note it's only for participants, not the general public.
sho_hn: Very different experience for me. Codex 5.3+ on xhigh are the only models I've tried so far that write reasonably decent C++ (domains: desktop GUI, robotics, game engine dev, embedded stuff, general systems engineering-type codebases), and idiomatic code in languages not well-represented in training data, e.g. QML. One thing I explicitly like is that it knows better when to stop, instead of brute-forcing a solution by spamming bespoke helpers everywhere in a way no rational dev would write.

Not always, no, and it takes investment in good prompting/guardrails/plans/explicit test recipes for sure. I'm still, on average, better at programming in context than Codex 5.4. But in terms of "task complexity I can entrust to a model and not be completely disappointed and annoyed", it scores the best so far.

It's annoying, too, because I don't much like OpenAI as a company.

(Background: 25 years of C++ etc.)
skippyboxedhero: Just checked my subscription start date for Anthropic. September 2023, I believe before they announced public launch.

Sorry kid.
vonneumannstan: So you are doubly stupid, by not seeing any improvement in the models and also paying for models you believe are terrible? lol
skippyboxedhero: That doesn't follow logically from what I said. You should ask your AI for help with this. You are in need of some artificial intelligence.
solumos: No no, MemPal is a memory system, not an LLM
anuramat: "some model I don't get to use is much better at benchmarks"pick one or more: comically huge model, test time scaling at 10e12W, benchmark overfit
estearum: So... you're not excited because it might take a few months before we can use it or something? I don't get your comment.
randomgermanguy: I think the general question is if they'll release it at all, haven't yet read anything stating that they would
Tepix: I for one applaud them for being cautious.
LoganDark: Being cautious is fine. Farming hype around something that may as well not exist for us should be discouraged. I do appreciate the research outputs.
bakugo: > Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available.

Absolutely genius move from Anthropic here.

This is clearly their GPT-4.5, probably 5x+ the size of their best current models and way too expensive to subsidize on a subscription for only marginal gains in real world scenarios.

But unlike OpenAI, they have the level of hysteric marketing hype required to say "we have an amazing new revolutionary model but we can't let you use it because uhh... it's just too good, we have to keep it to ourselves" and have AIbros literally drooling at their feet over it.

They're really inflating their valuation as much as possible before IPO using every dirty tactic they can think of.
somewhatjustin: Excellent example of a strategy credit. From Stratechery[0]:

> Strategy Credit: An uncomplicated decision that makes a company look good relative to other companies who face much more significant trade-offs. For example, Android being open source

[0]: https://stratechery.com/2013/strategy-credit/
estearum: Well let me introduce people to a few brand new concepts:

https://en.wikipedia.org/wiki/Capitalism
https://en.wikipedia.org/wiki/Race_to_the_bottom
https://en.wikipedia.org/wiki/Arms_race

Of course they'll release it once they can de-risk it sufficiently and/or a competitor gets close enough on their tail, whichever comes first.
gaigalas: It will arrive in the same DLC as flying cars.
Eufrat: Anthropic needs to show that its models continually get better. If the model showed minimal to no improvement, it would cause significant damage to their valuation. We have no way of validating any of this; there are no independent researchers that can back any of these assertions.

I don’t doubt they have found interesting security holes; the question is how they actually found them.

This System Card is just a sales whitepaper and just confirms what that “leak” from a week or so ago implied.
jph00: Yeah this has always been the glaring blind spot for most of the "AI Safety" community; and most of the proposals for "improving" AI safety actually make these risks far worse and far more likely.
chaos_emergent: An alternative but similar formulation of that statement is that Anthropic has spent more training effort in getting the model to “feel good” rather than being correct on verifiable tasks. Which more or less tracks with my experience of using the model.
SyneRyder: Genuine question - if you don't think the models are improved or that the code is any good, why do you still have a subscription?You must see some value, or are you in a situation where you're required to test / use it, eg to report on it or required by employer?(I would disagree about the code, the benefits seem obvious to me. But I'm still curious why others would disagree, especially after actively using them for years.)
anentropic: I'd be happy with Opus 4.6 just cheaper and maybe a bit faster
Jcampuzano2: A jump that we will never be able to use, since we're not part of the seemingly-minimum-$100-billion company club required to be allowed to use it.

I get the security aspect, but if we've hit that point, any reasonably sophisticated model from here on will be able to do the damage they claim this one can. They might as well be telling us they're closing up shop for consumer models.

They should just say out loud that they'll never release a model of this caliber to the public, and that we'll only get gimped versions.
cedws: More than killer AI, I'm afraid of Anthropic/OpenAI going into full rent-seeking mode, so that everyone working in tech is forced to fork out loads of money just to stay competitive in the market. These companies can also choose to give exclusive access to hand-picked individuals and cut everyone else off, and there would be nothing to stop them.

This is already happening to some degree: GPT 5.3 Codex's security capabilities were given exclusively to those who were approved for a "Trusted Access" programme.
aspenmartin: Well, don't forget we still have competition. Were Anthropic to rent-seek, OpenAI would undercut them. Were OpenAI and Anthropic to collude, that would be illegal. And even if Anthropic were to capture the entire coding agent market and THEN rent-seek, these days it's never been easier to raise $1B and start a competing lab.
cedws: In practice this doesn't work though; the Mastercard-Visa duopoly is an example where two competing forces don't create aggressive enough competition to benefit the consumer. The only hope we have is the Chinese models, but it will always be too expensive to run the full models yourself.
ceejayoz: Sure, but "the same results" will rapidly become unacceptable results if much better results are available.
esafak: Or will they rapidly become indistinguishable since they both get the job done?
jjice: Doesn't Anthropic not have that contract anymore, after all that buzz a month or so ago?
wmf: The point of that buzz was to force Anthropic to provide Mythos to the military.
jjice: Yeah but I thought they lost the contract, so that's my confusion with the parent's comment, which seemed to me to see this as something that the US military would benefit from. Maybe I misinterpreted?
quotemstr: This is why the EAs, and their almost comic-book-villain projects like "control AI dot com", cannot be allowed to win. One private company gatekeeping access to revolutionary technology is riskier than any consequence of the technology itself.
frozenseven: Couldn't agree more. The "safest" AI company is actually the biggest liability. I hope other companies make a move soon.
WarmWash: Are these fair comparisons? It seems like Mythos is going to be like a 5.4 Ultra or Gemini Deepthink tier model, where access is limited and token usage per query is totally off the charts.
adi_kurian: If one is to believe that API prices are a reasonable representation of non-subsidized "real world pricing" (with model training being the big exception), then the models are getting cheaper over time. GPT 4.5 was $150.00 / 1M tokens IIRC. GPT o1-pro was $600 / 1M tokens.
vonneumannstan: You can check the hardware costs for self-hosting a high-end open source model and compare that to the tiers available from the big providers. Pretty hard to believe it's not massively subsidized. Two years of Claude Max costs you $2,400. There is no hardware/model combination that gets you close to that price for that level of performance.
adi_kurian: Yes that's why I said API price. I once used the API like I use my subscription and it was an eye watering bill. More than that 2 year price in... a very short amount of time. With no automations/openclaw.
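To make that concrete, here is a rough monthly comparison under invented usage numbers (only the $2,400-for-two-years Max figure and the $25/$125 per 1M token Mythos list prices come from this thread; subscription tiers obviously meter usage differently, so treat this as a sketch):

```python
# Hypothetical heavy agentic usage per month.
monthly_input_tokens = 50_000_000
monthly_output_tokens = 5_000_000

api_cost = monthly_input_tokens / 1e6 * 25 + monthly_output_tokens / 1e6 * 125
subscription_cost = 2_400 / 24  # two years of Max, per the comment above

print(f"API at list prices: ${api_cost:,.0f}/month")          # $1,875/month
print(f"Max subscription:   ${subscription_cost:,.0f}/month")  # $100/month
```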
dwa3592:

- Impressive jumps in the benchmarks, which automatically raises the need for newer benchmarks. But why? I don't think benchmarks are serving any purpose at this point. We have learnt that transformers can learn any function and generalize over it pretty well. So if a new benchmark comes along, these companies will synthesize data for the new benchmark and just hack it?
- It seems like (and I'd bet money on this) they put a lot (and I mean a ton^^ton) of work into data synthesis and engineering: a team of software engineers probably sat down for 6-12 months and just created new problems and their solutions, which probably surpassed the difficulty of the SWE benchmark. They also probably transformed the whole internet into a loose "How to" dataset. I can imagine parsing the internet through Opus 4.6 and reverse-engineering the "How to" questions.
- I am a bit confused by the language used in the book (aka the huge system card). Anthropic is pretending like they did not know how good the model was going to be?
- Lastly, why are we going ahead with this??? Like, genuinely, what's the point? Opus 4.6 feels like a good enough point where we should stop. People still get to keep their jobs and do them very, very efficiently. Are they really trying to starve people out of their jobs?
MadnessASAP: I would assume somewhere in both companies there's a Ralph loop running with the prompt "Make AGI".

Kinda makes me think of the Infinite Improbability Drive.
orphea: Can LLMs be AGI at all?
FeepingCreature: No it isn't lol. The consequence of the technology literally includes human extinction. I prefer 0 companies, but I'll take 1 over 5.
jdthedisciple: Opus 4.6 is already incredible, so this leap is huge. Although, amusingly, today Opus told me that the string 'emerge' is not going to match 'emergency' using `LIKE '%emerge%'` in SQLite. Moment of disappointment. Otherwise great.
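For what it's worth, a minimal sqlite3 check (throwaway in-memory table, values made up) confirms the pattern does match:

```python
import sqlite3

# Throwaway in-memory table, purely to check the LIKE pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words (w TEXT)")
conn.executemany("INSERT INTO words VALUES (?)",
                 [("emergency",), ("merge",), ("emerge",)])

rows = conn.execute("SELECT w FROM words WHERE w LIKE '%emerge%'").fetchall()
print(rows)  # [('emergency',), ('emerge',)] -- 'merge' alone doesn't contain 'emerge'
```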
yrds96: I think there's no SOTA advance on this one worthy of "freaking out". Looks like they just built a way larger model, with the same quirks as Claude 4. Seems like a super expensive "Claude 4.7" model. I have no doubt that Google and OpenAI have already done that for internal (or even government) usage.
pixel_popping: Except it might be the current best model existing commercially?
laweijfmvo: The US has invaded two sovereign countries this year to take their oil. I assume taking over a US company for their AI model would be trivial.
skippyboxedhero: I think there are fundamental issues with the story that Anthropic is selling: AGI is very close, we will definitely get there, it is also very dangerous... so Anthropic should be the only ones trusted with AGI.

If you look at recent changes in Opus behaviour, and this model that is, apparently, amazingly powerful but even more unsafe... it seems suspect.
0x3f: > AGI is very closeBased on? Or are you just quoting Anthropic here?
skippyboxedhero: My Anthropic rep told me it was just around the corner...you aren't saying he lied to me? Can't believe this, I thought he was my friend.
bornfreddy: I only have 3 points against LLMs: they lack reason and they can't count.
nozzlegear: Freak out about what? I read the announcement and thought "that's a dumb name, they sure are full of themselves" – then I went back to using Claude as a glorified commit message writer. For all its supposed leaps, AI hasn't affected my life much in the real world except to make HN stories more predictable.
oliver236: LOL!
IceWreck: Didn't OpenAI say something similar about GPT-3? Too dangerous to open source, and then a few years later they were open-sourcing gpt-oss because a bunch of OSS labs were competing with their top models.
FeepingCreature: OpenAI didn't release GPT-2 initially because they were worried it would make it too easy to generate spam. Which it kinda did.
RobertDeNiro: Well for one, it’s a PDF
simianwords: The real part is SWE-bench Verified since there is no way to overfit. That's the only one we can believe.
ollin: My impression was entirely the opposite; the unsolved subset of SWE-bench Verified problems are memorizable (solutions are pulled from public GitHub repos) and the evaluators are often so brittle or disconnected from the problem statement that the only way to pass is to regurgitate a memorized solution.

OpenAI had a whole post about this, where they recommended switching to SWE-bench Pro as a better (but still imperfect) benchmark: https://openai.com/index/why-we-no-longer-evaluate-swe-bench...

> We audited a 27.6% subset of the dataset that models often failed to solve and found that at least 59.4% of the audited problems have flawed test cases that reject functionally correct submissions

> SWE-bench problems are sourced from open-source repositories many model providers use for training purposes. In our analysis we found that all frontier models we tested were able to reproduce the original, human-written bug fix

> improvements on SWE-bench Verified no longer reflect meaningful improvements in models’ real-world software development abilities. Instead, they increasingly reflect how much the model was exposed to the benchmark at training time

> We’re building new, uncontaminated evaluations to better track coding capabilities, and we think this is an important area to focus on for the wider research community. Until we have those, OpenAI recommends reporting results for SWE-bench Pro.
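To put the first two quoted numbers together: the audit covered a 27.6% subset (the problems models often fail), and at least 59.4% of that subset had flawed tests, so the audit alone establishes flawed tests for roughly 16% of the full benchmark, as a lower bound:

```python
# Lower-bound arithmetic on the figures quoted from OpenAI's post.
audited_fraction = 0.276      # share of SWE-bench Verified that was audited
flawed_within_audit = 0.594   # share of audited problems with flawed tests
print(f"{audited_fraction * flawed_within_audit:.1%} of all problems")  # ~16.4%
```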
simianwords: I stand corrected.
BoredPositron: To be honest it feels like we are reading stuff like this on every model release.
TypesWillSaveUs: Describing providing a highly valuable service for money as `rent seeking` is pretty wild.
FeepingCreature: 'emer ge' is two tokens, 'emergency' is one. The models think in a logosyllabic language.
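You can sanity-check splits like this with any BPE tokenizer; here is a quick sketch using OpenAI's tiktoken as a stand-in (Anthropic's tokenizer differs, so the exact splits Claude sees may not match what this prints):

```python
import tiktoken  # pip install tiktoken

# Illustrative only: show how a BPE vocabulary splits these strings.
enc = tiktoken.get_encoding("cl100k_base")
for word in ["emerge", "emergency"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> {len(ids)} token(s): {pieces}")
```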
brokencode: New companies can enter this space. Google’s competing, though behind. Maybe Microsoft, Meta, Amazon, or Apple will come out with top notch models at some point.There is no real barrier to a customer of Anthropic adopting a competing model in the future. All it takes is a big tech company deciding it’s worth it to train one.On the other hand, Visa/Mastercard have a lot of lock-in due to consumers only wanting to get a card that’s accepted everywhere, and merchants not bothering to support a new type of card that no consumer has. There’s a major chicken and egg problem to overcome there.
skippyboxedhero: You're completely right.
simianwords: uhh the model found actual vulnerabilities in software that people use. either you believe that the vulnerabilities were not found or were not serious enough to warrant a more thoughtful release
mlsu: So did GPT-4. https://arxiv.org/html/2402.06664v1

Like, think carefully about this. Did they discover AGI? Or did a bunch of investors make a leveraged bet on them "discovering AGI", so they're doing absolutely anything they can to make it seem like this time it's brand new and different?

If we're to believe Anthropic on these claims, we also have to just take it on faith, with absolutely no evidence, that they've made something so incredibly capable and so incredibly powerful that it cannot possibly be given to mere mortals. Conveniently, that's exactly the story that they are selling to investors.

Like, do you see the unreliable narrator dynamic here?
simianwords: I don't see the problem here. How would you have handled it differently? If you released this model as such without any safety concern, the vulnerabilities might be found by bad actors and used for wrong things.What do you find surprising here?
cyanydeez: Y'all know they're teaching to the test. I'll wait till someone devises a novel test that isn't contained in the datasets. Sure, they're still powerful.
tony_cannistra: > Claude Mythos Preview is, on essentially every dimension we can measure, the best-aligned model that we have released to date by a significant margin. We believe that it does not have any significant coherent misaligned goals, and its character traits in typical conversations closely follow the goals we laid out in our constitution. Even so, we believe that it likely poses the greatest alignment-related risk of any model we have released to date. How can these claims all be true at once? Consider the ways in which a careful, seasoned mountaineering guide might put their clients in greater danger than a novice guide, even if that novice guide is more careless: The seasoned guide’s increased skill means that they’ll be hired to lead more difficult climbs, and can also bring their clients to the most dangerous and remote parts of those climbs. These increases in scope and capability can more than cancel out an increase in caution.

https://www-cdn.anthropic.com/53566bf5440a10affd749724787c89...
tekacs: "We want to see risks in the models, so no matter how good the performance and alignment, we’ll see risks, results and reality be damned."