Discussion
Hardening Firefox with Anthropic’s Red Team
fcpk: The fact there is no mention of what the bugs were is a little odd. It'd really be nice to see whether these are "weird, never-happening edge cases" or actual issues. LLMs have an uncanny ability to identify failure patterns they have seen before, but those are not necessarily meaningful.
jandem: Here's a write-up for one of the bugs they found: https://red.anthropic.com/2026/exploit/
righthand: How much un-hardening of software and introduction of vulnerabilities will occur once Anthropic joins the Department of Defense? LLMs should be good for fuzz testing the implementation of vulns too, right?
iosifache: You can find them linked [1] in the OG article from Anthropic [2].
[1] https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...
[2] https://www.anthropic.com/news/mozilla-firefox-security
deafpolygon: I’m guessing it might be some of these: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...
muizelaar: Yeah, the ones reported by Evyatar Ben Asher et al.
stuxf: It's interesting that they counted these as security vulnerabilities (from the linked Anthropic article):
> “Crude” is an important caveat here. The exploits Claude wrote only worked on our testing environment, which intentionally removed some of the security features found in modern browsers. This includes, most importantly, the sandbox, the purpose of which is to reduce the impact of these types of vulnerabilities. Thus, Firefox’s “defense in depth” would have been effective at mitigating these particular exploits.
kingkilr: [Work at Anthropic, used to work at Mozilla.]
Firefox has never required a full chain exploit in order to consider something a vulnerability. A large proportion of disclosed Firefox vulnerabilities are vulnerabilities in the sandboxed process.
If you look at Firefox's Security Severity Rating doc (https://wiki.mozilla.org/Security_Severity_Ratings/Client), what you'll see is that vulnerabilities within the sandbox, and sandbox escapes, are both independently considered vulnerabilities. Chrome considers vulnerabilities in a similar manner.
staticassertion: I've had mixed results. I find that agents can be great for:
1. Producing new tests to increase coverage. Migrating you to property testing. Setting up fuzzing. Setting up more static analysis tooling. All of that would normally take "time" but now it's a background task.
2. They can find some vulnerabilities. They are "okay" at this, but if you are willing to burn tokens then it's fine.
3. They are absolutely wrong sometimes about something being safe. I have had Claude very explicitly state that a security boundary existed when it didn't. That is, it appeared to exist in the same way that a chroot appears to confine, and it was intended to be a security boundary, but it was not a sufficient boundary whatsoever. Multiple models not only identified the boundary and stated that it exists but referred to it as "extremely safe" or other such things. This has happened to me a number of times, and it required a lot of nudging for them to see the problems.
4. They often seem to do better with "local" bugs. Often something that has the very obvious pattern of an unsafe thing. Sort of like "that's a pointer deref" or "that's an array access" or "that's `unsafe {}`" etc. They do far, far worse the less "local" a vulnerability is. Product features that interact in unsafe ways when combined: that's something I have yet to see an AI pick up on. This is unsurprising. If we trivialize agents as "pattern matchers", then spotting some unsafe patterns and validating the known properties of those patterns is not so surprising, but "your product has multiple completely unrelated features, bugs, and deployment properties, which all combine into a vulnerability" is not something they'll notice easily.
It's important to remain skeptical of safety claims by models. Finding vulns is huge, but you need to be able to spot the mistakes.
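[Editor's note: a toy illustration of the kind of background task described in point 1 above. This is a minimal random-input fuzz loop, stdlib only; `parse_header` is an invented example target, not anything from Firefox or the thread.]

```python
import random
import string

def parse_header(data: str) -> dict:
    # Invented example target: parse "key: value" lines into a dict.
    out = {}
    for line in data.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            out[key.strip()] = value.strip()
    return out

def fuzz(iterations: int = 1000, seed: int = 0) -> int:
    # Throw random printable strings at the target and count crashes
    # (unexpected exceptions). A real harness would log the inputs that crash.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        blob = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 64)))
        try:
            parse_header(blob)
        except Exception:
            crashes += 1
    return crashes
```

Setting a harness like this up (and leaving it running) is exactly the kind of mechanical work that is easy to delegate to an agent as a background task.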
lloydatkinson: Anthropic feels like they are flailing around constantly trying to find something to do. A C compiler that didn't work, a browser that didn't work, and now solving bugs in Firefox.
manbash: I think it's a nice break from vibe-coding. It feels like a good direction in terms of use cases for LLMs.
robin_reala: I correctly misread that as “et AI”.
mentalgear: That's one good use of LLMs: fuzz testing / attack simulation.
nz: Not contradicting this (I am sure it's true), but why is using an LLM for this qualitatively better than using an actual fuzzer?
gehsty: This makes sense: they are demonstrating the capability of their core product by doing so. They don't make browsers or C compilers; they sell AI + dev tools.
jdiff: Seems like a poor advertisement for their product if their shining example of utility is a broken compiler that doesn't function as the README indicates.
driverdan: Anthropic's write-up [1] is how all AI companies should discuss their product. No hype, honest about what went well and what didn't. They highlighted areas of improvement too.
1: https://www.anthropic.com/news/mozilla-firefox-security
g947o: > Firefox was not selected at random. It was chosen because it is a widely deployed and deeply scrutinized open source project — an ideal proving ground for a new class of defensive tools.
What I was thinking was, "The Chromium team is definitely not going to collaborate with us because they have Gemini, while Safari belongs to a company that operates in a notoriously secretive way when it comes to product development."
vorticalbox: It's just a different attack surface. For Safari they would need to black-box attack the browser, which is much harder than what they did here.
tclancy: Part of that caught my eye. As yet another person who’s built a half-ass system of AI agents running overnight doing stuff, one thing I’ve tasked Claude with doing (in addition to writing tests, etc) is using formal verification when possible to verify solutions. It reads like that may be what Anthropic is doing in part.And this is a good reminder for me to add a prompt about property testing being preferred over straight unit tests and maybe to create a prompt for fuzz testing the code when we hit Ready state.
bell-cot: If only this attitude was more common. All security is, ultimately, multi-ply Swiss cheese and unknown unknowns. In that environment, patching holes in your cheese layers is a critical part of statistical quality control.
HekaH: A new analysis tool finds 18 bugs in a huge code base. That is not a lot, but apparently enough to write pages of corporate drivel that praises the tool to the skies.
Mozilla wants AI money.
amelius: Perhaps I missed it but I don't see any false positives mentioned.
mozdeco: [working for Mozilla]
That's because there were none. All bugs came with verifiable test cases (crash tests) that crashed the browser or the JS shell.
For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only), but according to our fuzzing guidelines these are not false positives and they will also be fixed.
Analemma_: It's important to fix vulnerabilities even if they are blocked by the sandbox, because attackers stockpile partial 0-days in the hopes of using them in case a complementary exploit is found later. i.e. a sandbox escape doesn't help you on its own, but it's remotely possible someone was using one in combination with one of these fixed bugs and has now been thwarted. I consider this a straightforward success for security triage and fixing.
simonw: What was Anthropic's "browser that didn't work"?
devin: Can you give me an example (real or imagined) where you're dipping into a bit of light formal verification?I don't think the problems I work on require the weight of formal verification, but I'm open to being wrong.
tclancy: To be clear, almost all (all?) of mine do not either, and it's partially due to the fact that I have been really interested in formal methods thanks to Hillel Wayne, but I don't seem to have the math background for them. To the man who has seen a fancy new hammer but cannot afford it, every problem looks like a nail.
The origin of it is a hypothesis that I can get better-quality code out of agents by making them do the things I don't (or don't always). So rather than quitting at ~80% code coverage, I am asking it to cover closer to 95%. There's a code complexity gate that I require better grades on than I would for myself, because I didn't write this code, so I can't say "Eh, I know how it works inside and out". And I keep adding little bits like that.
I think the agents have only used it 2 or 3 times. The one that springs to mind is a site I am "working" on where you can only post once a day. In addition, there's an exponential backoff system for bans to fight griefers. If you look at them at the same time, they're the same idea for different reasons: "User X should not be able to post again until [timestamp]". There's a set of a dozen or so formal-method proofs done in z3 to check the work that can be referenced (I think? god this all feels dumb and sloppy typed out) at checkpoints to ensure things have not broken the promises.
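[Editor's note: a plain-Python sketch (not z3, all names invented) of the shared invariant described above, namely that the once-a-day rule and the exponential ban backoff both reduce to a single "User X should not be able to post again until [timestamp]" check.]

```python
from datetime import datetime, timedelta

POST_COOLDOWN = timedelta(days=1)   # one post per day
BASE_BAN = timedelta(hours=1)       # first ban penalty; doubles per ban

def next_allowed_post(last_post: datetime, ban_count: int) -> datetime:
    # Both rules collapse into one "not before" timestamp.
    cooldown_until = last_post + POST_COOLDOWN
    # Exponential backoff: 1h, 2h, 4h, ... per accumulated ban.
    if ban_count > 0:
        ban_until = last_post + BASE_BAN * (2 ** ban_count)
    else:
        ban_until = last_post
    return max(cooldown_until, ban_until)

def may_post(now: datetime, last_post: datetime, ban_count: int) -> bool:
    return now >= next_allowed_post(last_post, ban_count)
```

A z3 proof of the same property would assert, for all timestamps and ban counts, that `may_post` never returns true before `next_allowed_post`; checking it at agent checkpoints catches regressions that break the promise.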
ferguess_k: However, the shape is there. And no one knows how good the thing is going to be after X months. We are measuring in months here, not even years.
I believe there is a theoretical cap on the capability of LLMs. I'm wondering what it looks like.
utopiah: I think they meant Cursor, cf https://news.ycombinator.com/item?id=46646777
amelius: Sounds good.
Did you also test on old source code, to see if it could find the vulnerabilities that were already discovered by humans?
rs_rs_rs_rs_rs: What? The js engine in Safari is open source, they can put Claude to work on it any time they want.
runjake: Here's a rough breakdown, formatted as best I can for HN:

  Safari (closed source)
  ├─ UI / tabs / preferences
  ├─ macOS / iOS integration
  └─ WebKit framework (open source) ~60%
     ├─ WebCore (HTML/CSS/DOM)
     ├─ JavaScriptCore (JS engine)
     └─ Web Inspector
shevy-java: I guess it is good when bugs are fixed, but are these real bugs or contrived ones? Is anyone doing quality assessment of the bugs here?
I think it was curl that closed its bug bounty program due to AI spam.
mozdeco: The bugs are at least of the same quality as our internal fuzzing bugs. They are either crashes or assertion failures; both of these are considered bugs by us. But they of course vary in value. Not every single assertion failure is ultimately a high-impact bug; some of these don't have an impact on the user at all. The same applies to fuzzing bugs, though; there is really no difference here. And ultimately we want to fix all of these, because assertions have the potential to find very complex bugs, but only if you keep your software "clean" w.r.t. assertion failures.
The curl situation was completely different because, as far as I know, those bugs were not filed with actual test cases. They were purely static bugs, and those kinds of reports eat up a lot of valuable resources to validate.
est31: I suppose eventually we'll see something like Google's OSS-Fuzz for core open source projects, maybe replacing bug bounty programs a bit. Anthropic already hands out Claude access for free to OSS maintainers.
LLMs made it harder to run bug bounty programs where anyone can submit stuff, and where a lot of people flooded them with seemingly well-written but ultimately wrong reports. On the other hand, the newest generation of these LLMs (in their top configuration) finally understands the problem domain well enough to identify legitimate issues.
I think a lot of judging of LLMs happens on the free and cheaper tiers, and quality on those tiers is indeed bad. If you set up a bug bounty program, you'll necessarily get bad-quality reports (as the cost of submission is usually zero). On the other hand, if instead of a bug bounty program you have a "top-tier LLM bug searching program", then the quality bar can be ensured, and maintainers will be getting high-quality reports.
Maybe one can save bug bounty programs by requiring a fee to be paid, idk, or by using LLMs there, too.
mccr8: Google already has an AI-powered security vulnerability project, called Big Sleep. It has reported a number of issues to open source projects: https://issuetracker.google.com/savedsearches/7155917?pli=1
g947o: Apple is not the kind of company that typically does these things, even if all of Safari were open source.
azakai: 1. This is a kind of fuzzer. In general, it's just great to have many different fuzzers that work in different ways, to get more coverage.
2. I wouldn't say LLMs are "better" than other fuzzers. Someone would need to measure findings/cost for that. But many LLMs do work at a higher level than most fuzzers, as they can generate plausible-looking source code.
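[Editor's note: to illustrate the "higher level" point, a classical grammar-based fuzzer already generates structured inputs rather than raw bytes. Below is a minimal recursive sketch over a toy arithmetic grammar (invented for the example); an LLM plays a similar role, except the "grammar" of plausible JS or HTML is learned implicitly.]

```python
import random

# Toy grammar: expr -> number | "(" expr ")" | expr op expr
def gen_expr(rng: random.Random, depth: int = 0) -> str:
    # Bias toward terminals as depth grows so generation terminates.
    if depth > 3 or rng.random() < 0.4:
        return str(rng.randint(0, 99))
    if rng.random() < 0.3:
        return "(" + gen_expr(rng, depth + 1) + ")"
    op = rng.choice(["+", "-", "*"])
    return gen_expr(rng, depth + 1) + " " + op + " " + gen_expr(rng, depth + 1)

rng = random.Random(42)
sample = gen_expr(rng)
# Every generated string is a syntactically valid expression by construction,
# so it exercises the evaluator's logic rather than just its error paths.
assert isinstance(eval(sample), int)
```

The same idea scaled up (a grammar over JS syntax plus mutation) is roughly how structure-aware JS-engine fuzzers work; an LLM can go further by producing inputs that are not just syntactically valid but semantically plausible.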
pjmlp: Indeed; without it, it looks like a fluffy marketing piece.
tptacek: And now that you know that it isn't, do you feel differently about the logic you used to write this comment?
john_strinlai: i am curious, what are you hoping to get out of this comment? will you feel better if they say yes? what is your plan if they say no?
JumpCrisscross: > what are you hoping to get out of this comment?Rando here. It gives a signal on the account’s other comments, as well as the value of the original comment (as a hypothesis, albeit a wrong one, versus blind raging).
john_strinlai: > "It gives a signal on the account's other comments,"
fair enough. i typically use karma as a rough proxy for that, especially when the user has a lot of it (like, in this case, where the poster is #17 on the leaderboard with 100,000+ karma).
> as well as the value of the original comment (as a hypothesis, albeit a wrong one, versus blind raging).
i don't see, in this case anyway, how or why that distinction would matter or change anything (in this case specifically, what would you change or do differently if it was a hypothesis versus simple "raging"?), but i'm probably missing something.
cubefox: Interesting end of the Anthropic report:
> Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them. This gives defenders the advantage. And with the recent release of Claude Code Security in limited research preview, we’re bringing vulnerability-discovery (and patching) capabilities directly to customers and open-source maintainers.
> But looking at the rate of progress, it is unlikely that the gap between frontier models’ vulnerability discovery and exploitation abilities will last very long. If and when future language models break through this exploitation barrier, we will need to consider additional safeguards or other actions to prevent our models from being misused by malicious actors.
> We urge developers to take advantage of this window to redouble their efforts to make their software more secure. For our part, we plan to significantly expand our cybersecurity efforts, including by working with developers to search for vulnerabilities (following the CVD process outlined above), developing tools to help maintainers triage bug reports, and directly proposing patches.
mmis1000: It's not really a question of better or worse, though. It's a more directed fuzzer than the rest. It can craft payloads that trigger flaws in deep flow paths, but it could also miss some obvious pattern that normal people wouldn't think is a problem (which is what most fuzzers currently test for).
hu3: There's much more to a browser than the JS engine.
They picked the most open-source one.
SahAssar: WebKit is not open source?
Sure, there are closed-source parts of Safari, but I'd guess at least 90% of Safari's attack surface is in WebKit and its parts.
Normal_gaussian: In many cases, the difference between a bug and an attack vector lies in the closed source areas.This is going to be the case automating attack detection against most programs where a portion is obscured.
rs_rs_rs_rs_rs: >In many cases, the difference between a bug and an attack vector lies in the closed source areas.You say many cases, let's see some examples in Safari.
TheBicPen: > you dont get that much karma if you are consistently posting bad takes.
I wonder how true that is. While this site doesn't incentivize engagement-maximizing behaviour (posting ragebait) like some other sites do, I would imagine that simply posting more is the best way to accrue karma long-term.
john_strinlai: > I would imagine that simply posting more is the best way to accrue karma long-term.
i definitely agree, which is why i use it as a rough proxy rather than ground truth, but i have my doubts that you can casually "post more" your way into the top 20 karma users of all time.
moffkalast: we can put that one next to the Weird AI Yankovic music generator.
dartharva: That's what people must have said back then about small offshoots like Google and Microsoft, back when Silicon Valley was nascent.
tptacek: I think a lot of people are overreading this and really all that's happened here is that I was out at a show last night and was really foggy when I woke up and asked a question clumsily. It happens!
john_strinlai: yeah, absolutely, i was not intending to start some big inquisition against you or anything.
just like you were genuinely trying to understand where pjmlp was coming from, i was genuinely trying to understand what you would get out of an answer to your question (or, like, what the next reply could even be other than "ok, cool").
sfink: > For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only)
There's some nuance here. I fixed a couple of shell-only Anthropic issues. At least mine were cases where the shell-only testing functions created situations that are impossible to create in the browser. Or at least, after spending several days trying, I managed to prove to myself that it was just barely impossible. (And it had been possible until recently.)
We do still consider those bugs and fix them one way or the other -- if the bug really is unreachable, then the testing function can be weakened (and assertions added to make sure it doesn't become reachable in the future). For the actual cases here, it was easier and better to fix the bug and leave the testing function in place.
We love fuzz bugs, so we try to structure things to make invalid states as brittle as possible so the fuzzers can find them. Assertions are good for this, as are testing functions that expose complex or "dangerous" configurations that would otherwise be hard to set up just by spewing out bizarre JS code or whatever. It causes some level of false positives, but it greatly helps the fuzzers find not only the bugs that are there, but also the ones that will be there in the future.
(Apologies for amusing myself with the "not only X, but also Y" writing pattern.)
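[Editor's note: a toy illustration (all names invented, not SpiderMonkey code) of the pattern sfink describes: an assertion that makes an invalid internal state brittle, plus a testing-only hook that lets a fuzzer reach "dangerous" configurations directly instead of via bizarre inputs.]

```python
class Buffer:
    """Toy object with an invariant: the read cursor never passes the write cursor."""

    def __init__(self) -> None:
        self.read_pos = 0
        self.write_pos = 0

    def _check(self) -> None:
        # Brittle by design: any code path that violates the invariant
        # crashes immediately, which is exactly what fuzzers look for.
        assert 0 <= self.read_pos <= self.write_pos, "cursor invariant violated"

    def write(self, n: int) -> None:
        self.write_pos += n
        self._check()

    def read(self, n: int) -> None:
        self.read_pos += n
        self._check()

    def testing_only_set_cursors(self, r: int, w: int) -> None:
        # Testing-only hook: exposes a configuration that is hard to reach
        # through the public API, so a fuzzer can probe it directly.
        # If the hook can create states the real API can't, that's the
        # "shell-only bug" category discussed above.
        self.read_pos, self.write_pos = r, w
        self._check()
```

Here `buf.read(5)` after only `buf.write(4)` trips the assertion; the trade-off is exactly as described: some shell-only false positives in exchange for catching real (and future) invariant violations early.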
sfink: As someone who saw a bunch of these bugs come in (and fixed a few), I'd say that Anthropic's associated writeup at https://www.anthropic.com/news/mozilla-firefox-security undersells it a bit. They list the primary benefits as:
1. Accompanying minimal test cases
2. Detailed proofs-of-concept
3. Candidate patches
This is most similar to fuzzing, and in fact could be considered another variant of fuzzing, so I'll compare to that. Good fuzzing also provides minimal test cases. The Anthropic ones were not only minimal but well-commented with a description of what it was up to and why. The detailed descriptions of what it thought the bug was were useful even though they were the typical AI-generated descriptions that were 80% right and 20% totally off base but plausible-sounding. Normally I don't pay a lot of attention to a bug filer's speculations as to what is going wrong, since they rarely have the context to make a good guess, but Claude's were useful and served as a better starting point than my usual "run it under a debugger and trace out what's happening" approach. As usual with AI, you have to be skeptical and not get suckered in by things that sound right but aren't, but that's not hard when you have a reproducible test case provided and you yourself can compare Claude's explanations with reality.
The candidate patches were kind of nice. I suspect they were more useful for validating and improving the bug reports (and these were very nice bug reports). As in, if you're making a patch based on the description of what's going wrong, then that description can't be too far off base if the patch fixes the observed problem. They didn't attempt to be any wider in scope than they needed to be for the reported bug, so I ended up writing my own. But I'd rather them not guess what the "right" fix was; that's just another place to go wrong.
I think the "proofs-of-concept" were the attempts to use the test case to get as close to an actual exploit as possible? I think those would be more useful to an organization that is doubtful of the importance of bugs. Particularly in SpiderMonkey, we take any crash or assertion failure very seriously, and we're all pretty experienced in seeing how seemingly innocuous problems can be exploited in mind-numbingly complicated ways.
The Anthropic bug reports were excellent, better even than our usual internal and external fuzzing bugs, and those are already very good. I don't have a good sense for how much juice is left to squeeze -- any new fuzzer or static analysis starts out finding a pile of new bugs, but most tail off pretty quickly. Also, I highly doubt that you could easily achieve this level of quality by asking Claude "hey, go find some security bugs in Firefox". You'd likely just get AI slop bugs out of that. Claude is a powerful tool, but the Anthropic team also knew how to wield it well. (They're not the only ones, mind.)
devin: I guess my feeling is that formal verification _even in the LLM era_ still feels heavy-handed/too expensive for too little value for a lot of the problems I'm working on.