Discussion
4qwUz: While I fully agree with the headline, I find it surprising that so many people implicitly claim familiarity with the aptly named "Mythos". Mythos is closed and currently has the status of an overhyped Anduril drone that failed contact with reality in Ukraine. If anyone has access to the mythical Mythos, we'll see the contact with reality.
neutered_knot: It is also not proof of work because of asymmetries between attacker and defender. An attacker only needs to find one exploitable issue before the defender finds it and patches it, while the defender eventually needs to find all issues - and even then can't really be sure they remediated everything. The defender also not only has to discover issues but also get fixes deployed. Installing patches takes time, and once the patch is available, the attacker can use it to reverse engineer the exploit and use it to attack unpatched systems. This is happening in a matter of hours these days, and AI can accelerate this. It is also entirely possible that the defender will never create patches or users will never deploy patches to systems because it is not economically viable. Things like cheap IoT sensors can have vulnerabilities that don't get addressed because there is no profit in spending the tokens to find and fix flaws. Even if they were fixed, users might not know about patches or care to take the time to deploy them because they don't see it as worth their time. Yes, there are many major systems that do have the resources to do reviews and fix problems and deploy patches. But there is an enormous installed base of code that is going to be vulnerable for a long time.
nottorp: Seriously. We need a BuSab for IT. This continuous rush is not healthy: npm updates, replies to articles that barely made HN 12 hours ago, anything like that. It's not healthy. Slow down.
andersmurphy: > What happens is that weak models hallucinate (sometimes casually hitting a real problem)
So the bigger models hallucinate better, casually hitting more real problems?
baxtr: Interestingly enough: earlier today this discussion was trending: https://news.ycombinator.com/item?id=47769089 (Cybersecurity looks like proof of work now)
RugnirViking: the article here is pretty clearly a response to the one you posted
onionisafruit: It’s only clear if you know it exists, and now I know it exists thanks to gp.
egormakarov: > Different LLMs executions take different branches, but eventually the possible branches based on the code possible states are saturated
With LLMs, even the halting problem is just a question of paying for the pro subscription!
redwood: What seems to be getting lost in the noise on this topic is that security has always been about defense in depth and mitigating controls, in other words applied paranoia. There are always threat vectors, and we're seeing a change in the shape of those vectors with more rapidity than ever before, which is certainly exhausting for everyone. But don't forget the fundamentals here.
dtech: The proof that the halting problem is unsolvable usually uses a specific "adversarial" machine. In practice it's incredibly likely that the halting question is answerable for any specific real-life program.
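(Aside: a minimal Python sketch of the adversarial construction dtech is referring to. The halts oracle is hypothetical and cannot actually be implemented, which is the whole point; the sketch is illustrative only.)

```python
# Sketch of the classic "adversarial" machine from the undecidability proof.
# `halts` is a hypothetical oracle; no such general decider can exist.

def halts(program, arg) -> bool:
    """Hypothetical total decider: True iff program(arg) halts."""
    raise NotImplementedError("no such general decider can exist")

def adversary(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# Asking halts(adversary, adversary) contradicts any answer it could give:
# that is the proof. Real-world programs are rarely built to defeat their own
# analyzer, which is why termination is usually answerable in practice
# (e.g. a loop over a counter that strictly decreases to zero).
```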
alex_young: The whole framing is kind of uninteresting imo. "If you spend more time researching code you can find more bugs to exploit/patch" is not an earthshaking observation. Adding the words “by Claude” to it doesn’t materially change it. One could also pay a few humans to do the same thing. People have done that for decades.
rakejake: >> Test it yourself, GPT 120B OSS is cheap and available. BTW, this is why with this bug, the stronger the model you pick (but not enough to discover the true bug), the less likely it is that it will claim there is a bug.
I guess this is the crux of the debate. All the claims are comparing models that are available freely with a model that is available only to limited customers (Mythos). The problem here is with the phrase "better model". Better how? Is it trained specifically on cybersecurity? Is it simply a large model with a higher token/thinking budget? Is it a better harness/scaffold? Is it simply a better prompt? I don't doubt that some models are stronger than others: a Gemini Pro or a Claude Opus has more parameters and higher context sizes, and was probably trained for longer and on more data, than its smaller counterpart (Flash and Sonnet respectively). Unless we know the exact experimental setup (which in this case is impossible because Mythos is completely closed off and not even accessible via API), all of this is hand-wavy. Anthropic is definitely not going to reveal their setup because, whether or not there is any secret sauce, there is more value in letting people's imaginations fly and the marketing machine work. Anthropic must be jumping with joy at all the free publicity they are getting.
solenoid0937: Mythos isn't restricted for marketing purposes - that would be incredibly dumb because Anthropic would be giving up first-mover advantage for next-gen models. It's restricted because it's genuinely good at finding vulnerabilities, and employees felt that it's not a good idea to give this capability to everyone without letting defenders front-run. That's it. That's all there is to it. It is not some grand marketing play.
antirez: In the Anthropic Mythos model cards they explicitly remarked that they didn't want Mythos to be specifically good at security. They trained it to be good at coding, and as a side effect the model is (obviously) good at security. This is what happens with flesh hackers too, mostly. Hackers are very good programmers; as a side effect, they understand systems well enough that their understanding has security implications.
Hendrikto: Model cards are just marketing material. I wouldn’t trust them one bit.
2983592: But they are treated as holy scripture ...
2983592: How do you know? If you have access you are not unbiased; otherwise you cannot know, by definition. AI companies routinely claim that something is too dangerous to release (I think GPT-2 was the first case) for marketing reasons. There are at least 10 documented high-profile cases. They keep it secret because they now sell to the MIC, with China and North Korea bullshit stories, as well as to companies that are themselves invested in the AI hype.
pixl97: This is the weirdest take I've seen. It takes humans a very long time to learn how to code/find bugs. You just can't take any human and have them do it in a reasonable amount of time with a reasonable amount of money. Claude is effectively automation: once you have the hardware, you can run as many copies of the model as you want. Factories can build hardware far faster than we can train more people. It's weird to see a denial of the industrial revolution on HN.
alex_young: A bit uncharitable, no? I'm not denying that LLMs can be used to improve security research, or suggesting that their use is wrong, or anything like that. Humans have used software to research security for a long time. AI-driven SAST is clearly going to help improve productivity.
Glemllksdf: If it's really more expensive per token, it might have more parameters and would then be able to hold more context/scope of code. Rumors say it has 10 trillion parameters vs. 1 trillion.
riteshkew1001: Calling AI vuln-finding 'hallucination plus luck' is generous; a lot of human pentesting fits the same description.
EGreg: This just proves that we should stop using old environments and operating systems for mission-critical work, and build a completely new environment from the ground up that's secure by default, instead of trying to fix leaky buckets.
antirez: You don't need to trust anyone. GPT 5.4 xhigh is available and you can test it for $20, to verify it is actually able to find complex bugs in old codebases. Do the work instead of denying AI can do certain things. It's a matter of an afternoon. Or, trust the people that did this work. See my YouTube video where I find tons of Redis bugs with GPT 5.4.
Glemllksdf: It reduces the cost significantly. A good security expert earns how much per year? And that person works 8/5. Now you can just throw money at it. The CIA and co. surely pay more than $20k (that's what the Anthropic red team stated as the cost of a complex exploit) for a zero-day. If someone builds some framework around this, you can literally copy and paste it, throw money at it, and scale it. This is not possible with a human.
eikenberry: > It reduces the cost significantly.
> Now you can just throw money at it.
What happens when you throw enough money at it that it raises the cost significantly?
dwa3592: Fighting over analogies is kind of pointless imo, but if you want me to indulge, here is what I will ask: do you consider breadth-first search or depth-first search better? The good answer is: it depends on the search surface. In the same way, bugs and vulnerabilities are hiding somewhere on the surface of the software or inside it (exploiting dependencies). In conclusion: having a lot of tokens helps! Having a better model also helps. Having both helps a lot. Having very intelligent humans + a lot of tokens + the best frontier models will help the most (emphasis on intelligent human).
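(Aside: a toy illustration of the BFS/DFS point above, with a made-up tree. Whether breadth-first or depth-first reaches the "bug" node first depends entirely on where the bug sits.)

```python
from collections import deque

# Toy tree: one deep branch (0-1-2-3-4) and one shallow, wide branch (0-5-{6,7,8}).
tree = {0: [1, 5], 1: [2], 2: [3], 3: [4], 4: [], 5: [6, 7, 8], 6: [], 7: [], 8: []}

def bfs_steps(target):
    """Nodes visited before breadth-first search finds `target`."""
    queue, steps = deque([0]), 0
    while queue:
        node = queue.popleft()
        steps += 1
        if node == target:
            return steps
        queue.extend(tree[node])
    return -1

def dfs_steps(target):
    """Nodes visited before depth-first search finds `target`."""
    stack, steps = [0], 0
    while stack:
        node = stack.pop()
        steps += 1
        if node == target:
            return steps
        stack.extend(reversed(tree[node]))
    return -1

print(bfs_steps(5), dfs_steps(5))  # 3 vs 6: a shallow "bug" favors BFS
print(bfs_steps(4), dfs_steps(4))  # 9 vs 5: a deep "bug" favors DFS
```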
kang: Maybe a human knowledgeable in the domain (the training) is better than a smart linguist-programmer.
ramoz: > So, cyber security of tomorrow will not be like proof of work in the sense of "more GPU wins"; instead, better models, and faster access to such models, will win.
Tomato, tomato.
TZubiri: So kind of like how you would get nowhere by buying more GPUs if there are already ASICs in play.
slopinthebag: > Don't trust who says that weak models can find the OpenBSD SACK bug. I tried it myself.
This is exactly the argument AI skeptics make btw. Also, you say you tried GPT 120B OSS; that's like me proclaiming LLM coding doesn't work because I tried putting GPT 3.5 in Claude Code. Try it with GLM 5, Qwen, etc. Or improve your harness :)
kang: The proof-of-work in AI (LLMs) 'can be' from the training side (not the inference side this blog explores), if a hashcash-like 'proof' of a model having been trained were defined. It should be possible to do so, since the very least measure of a model having gotten smarter from some additional data is that it will recognize/infer the said additional data correctly.
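(Aside: a toy version of kang's idea, with a character-bigram model standing in for the LLM. The verifiable signal is simply that the model's loss on the additional data drops measurably once that data is included in training; the corpus strings and the bigram model are made up for illustration.)

```python
import math
from collections import Counter

def bigram_logloss(corpus: str, probe: str) -> float:
    """Average negative log-probability of `probe` under add-one-smoothed bigram counts from `corpus`."""
    counts = Counter(zip(corpus, corpus[1:]))
    context = Counter(corpus[:-1])
    loss, n = 0.0, 0
    for a, b in zip(probe, probe[1:]):
        p = (counts[(a, b)] + 1) / (context[a] + 256)
        loss -= math.log(p)
        n += 1
    return loss / max(n, 1)

base = "the quick brown fox jumps over the lazy dog " * 50
extra = "sack blocks must satisfy start < end before use " * 50

before = bigram_logloss(base, extra)         # model that never saw `extra`
after = bigram_logloss(base + extra, extra)  # model "trained" on `extra` too
print(f"loss before: {before:.3f}  after: {after:.3f}")
assert after < before  # the measurable drop is the crude "proof" of training
```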
zahlman: > Hackers are very good programmersThis does not match my experience.
ang_cire: The missing part of their intended meaning is "skilled hackers". Unskilled hackers are everywhere, and they're bad at programming, but so are unskilled programmers.
drob518: Right, but what is interesting is that you can buy it off the rack for the price of tokens. You don’t have to do a specialist search for a security expert, pay a recruiter, hire them, wait for the specialist to start, pay them a signing bonus, pay them an expert-level salary, pay their social security taxes, healthcare benefits, and finally pay for an exit package when you lay them off because the project got canceled. You buy tokens when you need them and you stop buying when you don’t. This was the same dynamic that made cloud computing more interesting than company-owned servers in a company-owned data center. It’s more responsive to business needs and it falls under the development expense budget, not payroll, so you can do it even during hiring freezes.
tracker1: But you do have to have at least one employee or contractor skilled enough to actually understand the scope of a given bug report from the agent in order to determine its validity. I've seen plenty of legit bug reports by humans get dismissed because the reviewer didn't understand the material impact or how the bug/exploit worked.
drob518: Yep, sure. So, maybe you hire one and not three. The point is, it’s going to be fewer. Of course, all that assumes the AI is actually as good as a human, which I’m still skeptical of.
i_think_so: Chef's kiss. Logged in just to show some love. +1 for the economics. +1 again (if I could) for the truth-to-power. We need a lot more of this kind of multi-disciplinary skepticism to counterbalance the industrial-grade rockstar ninja 10x Kool-Aid drinking.
zozbot234: > It is also not proof of work because of asymmetries between attacker and defender. An attacker only needs to find one exploitable issue before the defender finds it and patches it, while the defender eventually needs to find all issues - and even then can't really be sure they remediated everything.
It depends. Some classes of vulnerabilities can be excluded by construction. This is usually seen as too hard to be practicable, but AI potentially changes this.
rakejake: Yes, that does track with my personal experience. More context, more params, and no quantization is probably it. But my hunch is that all the training data they've been getting in the past year also plays a part here. More than any other lab, Anthropic's focus on coding right from the beginning gives them access to the best training data (several GitHubs' worth). Most of this code comes with human feedback, and Anthropic even has data on how many changes went to production, got reverted, etc. No need to pay for human labeling when your customers are doing it for you. This is their secret sauce.
dmix: That safety stuff is almost always either quacks whose job it is to exaggerate LLMs at their nonprofits, or marketing hype that "our models are so powerful you should fear them". Then they release them and the world moves on and adapts. Mythos will benefit security in the long run more than it benefits hackers, if it can do what they claim. And there's nothing that will stop an LLM like it from being released in the near term, so it's very likely just resource constraints or marketing.
qsort: A couple of alternative scenarios, although I'm not sure how much stock we should put in them:
- What if at a certain level of capability you're essentially bug-free? I'm somewhat skeptical that this could be the case in a strong sense, because even if you formally prove certain properties, security often crucially depends on the threat model (e.g. side-channel attacks, constant-time code, etc.), but maybe it becomes less of a problem in practice?
- What if past a certain capability threshold weaker models can substitute for stronger ones if you're willing to burn tokens? To take a coding example, GPT-3 couldn't code at all, so I'd rather have X tokens with, say, GPT 5.4 than 100X tokens with GPT-3. But would I rather have X tokens with GPT 5.4 or 100X tokens with GPT 5.2? That's a bit murkier, and I could see some kind of indifference curve there.
Leomuck: Honestly, if every software project ran an AI-based security check over their code, the software world would probably be more secure. Of course, there are lots of projects that don't need that, having skilled people behind them, but we've seen many popular software projects (even by big companies) that didn't care at all. So even a basic model would find issues. Also, I find myself thinking more and more that the ability to pay for tokens is becoming crucial. And it's unfair. If you don't have money, you don't have access. Somehow, a worsening of class conflicts. If you know what I mean.
serial_dev: Not only that, even if you would like to pay, the best model providers could decide any day that they want to save on cost, so they nerf the responses. Then your shipping on time is at the mercy of these companies. If you spend months shipping slop, because "models will get better and tomorrow's models can fix today's slop", what happens when they not only do not get better, but actually get worse, and you are left with a bunch of slop you don't understand and problem-solving muscles that have gotten weak?
byzantinegene: IMO this is the only way model providers can survive in the long run: bank on their users' overreliance on them resulting in diminishing capabilities. This gives them leverage to increase prices without any pushback.
serial_dev: You will know nothing and you will be happy.
thesuperevil: That’s interesting
4qskhaqj: Mythos is just used to get new business contacts:
1) Create fear via the pro-American Axel Springer press (Politico). Use UK/EU competition to make the EU jealous: https://www.politico.eu/article/anthropic-hacking-technology...
2) Hype up the thing via clueless publications like the Guardian: https://www.theguardian.com/technology/2026/apr/17/finance-l... ("As you would expect, the engagement I have had from UK CEOs in the last week has been significant.")
3) Sell the damn thing that finds 20 vulns in an NNTP-over-CORBA app written in INTERCAL to EU and UK companies.
None of the people involved in "dealing with the threat" have the slightest clue. UK/EU always fall for the latest US hype and CEOs pay up.
csmantle: > So, cyber security of tomorrow will not be like proof of work in the sense of "more GPU wins"; instead, better models, and faster access to such models, will win.
It's not proof of work, but proof of financial capacity. The big companies are turning access to high-quality token generators (through their service) into means of production. We're all going direct to Utopia, we're all going direct the other way.
tptacek: There's no "proof" involved. That's the problem with the analogy. It's not about how much "financial capacity" you have. It's about how many bugs you find or fix. The bugs are there whether the models help attackers/defenders or not.
niea_11: I'm confused by the last part saying that if "weak" models (like GPT OSS) find the OpenBSD bug they are just hallucinating, and also that stronger models not finding it is because they don't hallucinate but are not strong enough. AISLE demonstrated in the last few weeks that small (weak, per the author) models can find the OpenBSD bug (when pointed at the code), and apparently did several runs with the same results. Was GPT OSS hallucinating on all those runs? And what separates a strong model from a weak one? Is Qwen3.5 27B weak?
> Don't trust who says that weak models can find the OpenBSD SACK bug. I tried it myself. What happens is that weak models hallucinate (sometimes casually hitting a real problem) that there is a lack of validation of the start of the window (which is in theory harmless because of the start < end validation) and the integer overflow problem, without understanding why the two, put together, create an issue. It's just pattern matching of bug classes on code that looks like it may have a problem, totally lacking the true ability to understand the issue and write an exploit. Test it yourself, GPT 120B OSS is cheap and available. BTW, this is why with this bug, the stronger the model you pick (but not enough to discover the true bug), the less likely it is that it will claim there is a bug. Stronger models hallucinate less, so they can't see the problem from either side of the spectrum: the hallucination side of small models, or the real-understanding side of Mythos.