Discussion
CPUs Aren't Dead. Gemma 2B Just Scored Higher Than GPT-3.5 Turbo on the Test That Made It Famous — Your Laptop Can Run It, or Cloudflare for $5/Mo.
100ms: Tiny model overfit on benchmark published 3 years prior to its training. News at 10
bigyabai: But GPT-3.5 was benchmaxxing too.
100ms: GPT-3.5 Turbo's knowledge cutoff was circa 2021. MT-Bench is from 2023. Not suggesting improvements on small models aren't possible (or forthcoming, the 1.85 bit etc models look exciting), but this almost certainly isn't that.
roschdal: I yearn for the days when I can program on my PC with a programming llm running on the CPU locally.
trgn: we need sqlite for llms
FergusArgyll: Poster's comment is dead. It may be llm-assisted but should prob be vouched for anyway as long as the story isn't flagged.
fredmendoza: appreciate the vouch but come on lol. we ran 80 questions, graded 160 turns by hand, documented 7 error classes, open sourced all the code, and put a live bot up for people to test. to write this post up took me hours. everyone is a critic lol.
semiquaver: This really shows the power of distillation. One thing I find amusing: download the Google Edge Gallery app and one of the chat models, then go into airplane mode and ask it about where it’s deployed. gemma-4-e2b-it is quite confident that it is deployed in a Google datacenter and that deploying it on a phone is completely impossible. The larger 4B model is much subtler: it’s skeptical about the claim but does seem to accept it and sound genuinely impressed and excited after a few turns.
luxuryballs: You can do it on a laptop today, faster with gpu/npu, it’s not going to one shot something complex but you can def pump out models/functions/services, scaffold projects, write bash/powershell scripts in seconds.
fb03: Can you run the same tests on Qwen3.5:9b? That's also a model that runs very well locally, and I believe it's even stronger than Gemma2B
svnt: > The model does not need to be retrained. It needs surgical guardrails at the exact moments where its output layer flinches.
> With those guardrails — a calculator for arithmetic, a logic solver for formal puzzles, a per-requirement verifier for structural constraints, and a handful of regex post-passes — the projected score climbs to ~8.2.
Surgical guardrails? Tools, those are just tools.
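The "calculator plus regex post-pass" guardrail the quoted passage describes is easy to picture concretely. A minimal sketch, assuming a hypothetical `patch_arithmetic` helper (names and regex are illustrative, not from the article): scan model output for arithmetic claims like `12 * 34 = 472`, recompute each one with a safe AST evaluator instead of trusting the model, and rewrite the wrong ones.

```python
import ast
import operator
import re

# Binary operators the tiny "calculator" guardrail supports.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def _safe_eval(expr: str) -> float:
    """Evaluate a pure-arithmetic expression via the AST, never eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

# Matches claims like "12 * 34 = 472" in the model's output.
_CLAIM = re.compile(r"(\d+(?:\s*[-+*/]\s*\d+)+)\s*=\s*(\d+(?:\.\d+)?)")

def patch_arithmetic(text: str) -> str:
    """Regex post-pass: recompute each arithmetic claim, fix wrong answers."""
    def fix(m):
        expr, claimed = m.group(1), m.group(2)
        actual = _safe_eval(expr)
        if abs(actual - float(claimed)) < 1e-9:
            return m.group(0)  # model got it right; leave it alone
        shown = int(actual) if actual == int(actual) else actual
        return f"{expr} = {shown}"
    return _CLAIM.sub(fix, text)

print(patch_arithmetic("So 12 * 34 = 472, roughly."))
```

Whether you call that a "surgical guardrail" or just a tool, the mechanism is the same: the model proposes, a deterministic checker disposes.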
operatingthetan: > It needs surgical guardrails at the exact moments where its output layer flinches.
This article is very clearly shitty LLM output. Abstract noun and verb combos are the tipoff. It's actually quite horrible: it repeats lines from paragraph to paragraph.
smallerize: I know that's one of the tells of AI-generated text, but if anything there's too much of it on this page. The article barely has any complete sentences. I think a human learned "sentence fragments == punchy" and then had too much fun writing at least some of this article.
operatingthetan: My guess is they used the 2b model to write the article as a proof of concept. Which did not prove the concept.
declan_roberts: I'm very surprised at the quality of the new Gemma 4 models. On my 32 gig Mac mini I can be very productive with it. Not close to replacing paid AI by a long shot, but if I had to tighten the belt I could do it as someone who already knows how to program.
fredmendoza: love hearing this. and think about it, if the 2B is already doing this well on your mac mini, imagine what the 4B, 26B, or 31B can do on 32 gigs. with lower quantization you can fit pretty much any of them. if you want full precision you still have solid options at the 2B and 4B level. you're sitting on way more capability than you're probably using right now. the coding block on just the 2B scored 8.44 and caught bugs most people would miss. glad you're getting real use out of it, thanks for reading.
jchw: I don't care anymore, if it happens to violate HN guidelines: Please, authors. Please write your own damn articles. We can absolutely tell that you're using Claude, I promise. (I mean, it might not be Claude specifically this time, but frankly I'd be willing to bet on it.) The AI writing is like nails on a chalkboard to me.
operatingthetan: The worst part is the phrases don't actually mean anything. It's the LLM equivalent of flowery prose. The author admitted below that the article was Claude. So there you go.
fredmendoza: thank you for actually reading it and getting it. the airplane mode test is hilarious, the model sitting on your phone insisting it can't run on a phone. that's amazing. and yes we think exactly the same way. like picture a small business owner with a pi in the back office just quietly processing invoices, drafting email replies, summarizing meeting notes all day. no subscription, no cloud, no one sees their data. that's not a hypothetical, that works right now with this model. when that's free and fits in your pocket the trillion dollar question gets real uncomfortable real fast.
SwellJoe: Terrible article, repetitive AI slop.
But, Gemma really is very impressive. The premise that people are paying for GPT-3.5 or using it for serious work is weird, though? GPT-3.5 was bad enough to convince a lot of folks they didn't need to worry about AI. Good enough to be a chatbot for some category of people, but not good enough to actually write code that worked, or prose that could pass for human (that's still a challenge for current SOTA models, as this article written by Claude proves, but code is mostly solved by frontier models).
Tiny models are what I find most exciting about AI, though. Gemma 2B isn't Good Enough for anything beyond chatting, AFAIC, and even then it's not very smart. But, Gemma 31B or the MoE 26BA4B probably are Good Enough. And, those run on modest hardware, too, relatively speaking. A 32GB GPU, even an old one, can run either one at 4-bit quantization, and they're OK, competitive with frontier models of 18 months ago. They can write code in popular languages, the code works. They can use tools. They can find bugs. Their prose is good, though still obviously AI slop; too wordy, too flowery. But, you could build real and good software using nothing but Gemma 4 31B, if you're already a good programmer that knows when the LLM is going off on a bizarre tangent. For things where correctness can be proven with tools, a model at the level of Gemma 4 31B can do the job, if slower and with a lot more hand-holding than Opus 4.6 needs.
The Prism Bonsai 1-bit 8B model is crazy, too. Less than 2GB on disk, shockingly smart for a tiny model (but also not Good Enough, by my above definition, it's similarly weak to Gemma 2B in my limited testing), and plenty fast on modest hardware.
Small models are getting really interesting. When the AI bubble pops (or whatever happens to normalize things, so normal people can buy RAM and GPUs again) we'll be able to do a lot with local models.
melonpan7: Gemma is genuinely impressive, for many trivial quick questions it can replace search engines on my iPhone. Although for reasoning I definitely wouldn’t say it (Gemma 3n E2B) is smart, it unsurprisingly struggled with the classic car wash question.