Discussion
itsthecourier: https://github-production-user-asset-6210df.s3.amazonaws.com...
demo shows a huge love for water, this AI knows its home
nickcw: > bitnet.cpp is the official inference framework for 1-bit LLMs (e.g., BitNet b1.58). It offers a suite of optimized kernels that support fast and lossless inference of 1.58-bit models on CPU and GPU (NPU support will be coming next).
One bit or one trit? I am confused!
drsopp: "1-bit LLMs" is just marketing. The Shannon entropy of one letter with a 3 symbol alphabet (-1, 0, 1) is 1.58.
LuxBennu: The title is misleading — there's no trained 100B model, just an inference framework that claims to handle one. But the engineering is worth paying attention to. I run quantized 70B models locally (M2 Max 96GB, llama.cpp + LiteLLM), and memory bandwidth is always the bottleneck. The 1.58-bit approach is interesting because ternary weights turn matmuls into additions — a fundamentally different compute profile on commodity CPUs. If 5-7 tok/s on a single CPU for 100B-class models is reproducible, that's a real milestone for on-device inference. Framework is ready. Now we need someone to actually train the model.
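(A minimal sketch of what "matmuls turn into additions" means here, in plain Python with a hypothetical function name, not bitnet.cpp's actual kernels: with weights restricted to {-1, 0, +1}, a dot product reduces to adding some activations, subtracting others, and skipping the zeros, followed by a single scale factor.)

```python
def ternary_dot(weights, activations, scale=1.0):
    """Toy dot product with ternary weights: no multiplies in the inner loop.

    weights: values in {-1, 0, +1}; activations: floats;
    scale: a single per-tensor scale applied at the end.
    Illustration only -- real kernels use packed weights and SIMD.
    """
    acc = 0.0
    for w, a in zip(weights, activations):
        if w == 1:
            acc += a
        elif w == -1:
            acc -= a
        # w == 0: contributes nothing, skipped entirely
    return acc * scale

# ternary_dot([1, -1, 0, 1], [0.5, 2.0, 3.0, -1.0], scale=0.1) -> -0.25
```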
butILoveLife: > I run quantized 70B models locally (M2 Max 96GB, llama.cpp + LiteLLM), and memory bandwidth is always the bottleneck.
I imagine you got 96GB because you thought you'd be running models locally? Did you not know the phrase Unified Memory is marketing speak?
giancarlostoro: One of the things I often wonder is: what is the minimally viable LLM that can work from just enough built-in information that, if it googles the rest, it can provide reasonable answers? I'm surprised something like Encyclopaedia Britannica hasn't yet (afaik) tried to capitalize on AI by selling its data to LLM companies and validating their outputs; it would make a night-and-day difference in some areas, I would think. Wikipedia is nice, but there's so much room for human error and bias there.
utopiah: > validating outputs for LLM companies
How? They can validate thousands if not millions of queries, but nothing prevents the million-and-first from being a hallucination. People who paid extra for an "Encyclopaedia Britannica validated LLM" would then, rightfully so IMHO, complain that "it" suggested they cook with a dangerous mushroom.
cubefox: LLM account
embedding-shape: > Framework is ready. Now we need someone to actually train the model.
If Microslop aren't gonna train the model themselves to prove their own thesis, why would others? They've had 2 years (I think?) to prove BitNet in at least some way; are you really saying they haven't tried so far?
Personally that makes it slightly worrisome to just take what they say at face value. Why wouldn't they train and publish a model themselves if this actually led to worthwhile results?
wongarsu: I've also always thought that it's an interesting opportunity for custom hardware. Two-bit addition is incredibly cheap in hardware, especially compared to anything involving floating point. You could make huge vector instructions on the cheap, then connect it to the fastest memory you can buy, and you have a capable inference chip.
You'd still need full GPUs for training, but for inference the hardware would be orders of magnitude simpler than what Nvidia is making.
august11: In their demo they're running a 3B model.
orbital-decay: Funny enough I now involuntarily take RTFA as a slight slop signal, because all these accounts dutifully read the article before commenting, unlike most HNers who often respond to headlines.
yorwba: Not all of them do: https://news.ycombinator.com/item?id=47335156 There are evidently lots of people experimenting with different botting setups. Some do better at blending in than others.
gregman1: Cannot agree more!
regularfry: You only need GPUs if you assume the training is gradient descent. GAs or anything else that can handle nonlinearities would be fine, and possibly fast enough to be interesting.
rustyhancock: Yes. I had to read it over twice; it does strike me as odd that there wasn't a base model to work with.
But it seems the biggest model available is 10B? Somewhat unusual, and it does make me wonder just how challenging it will be to train any model in the 100B order of magnitude.
wongarsu: Approximately as challenging as training a regular 100B model from scratch. Maybe a bit more challenging because there's less experience with it.
The key insight of the BitNet paper was that using their custom BitLinear layer instead of normal Linear layers (along with some further training and architecture changes) led to much, much better results than quantizing an existing model down to 1.58 bits. So you end up doing a full training run in bf16 precision using the specially adapted architecture.
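(For anyone wondering what swapping Linear for BitLinear roughly looks like in training code, here is a toy PyTorch-style sketch of the idea; this is my own simplification, not the paper's actual implementation. It keeps latent full-precision weights, ternarizes them on the fly with a straight-through estimator, and omits the paper's activation quantization and normalization details.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinearSketch(nn.Module):
    """Toy BitLinear-style layer: fp latent weights, ternary weights in the forward pass."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x):
        w = self.weight
        scale = w.abs().mean().clamp(min=1e-5)   # absmean scaling factor
        w_q = (w / scale).round().clamp(-1, 1)   # ternary values in {-1, 0, 1}
        # straight-through estimator: use ternary weights in the forward pass,
        # but let gradients flow to the full-precision latent weights
        w_ste = w + (w_q * scale - w).detach()
        return F.linear(x, w_ste)
```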
WithinReason: > a fundamentally different compute profile on commodity CPUs
In what way? On modern processors, a Fused Multiply-Add (FMA) instruction generally has the exact same execution throughput as a basic addition instruction.
actionfromafar: BitNet encoding packs information more densely per byte, perhaps? CPUs have slow buses, so it would eke more use out of the available bandwidth?
Dwedit: Log base 2 of 3 = ~1.5849625, so that's the limit to how well you can pack three-state values into bits of data.
For something more practical, you can pack five three-state values within a byte because 3^5 = 243, which is smaller than 256. To unpack, you divide and modulo by 3 five separate times. This encodes data in bytes at 1.6 bits per symbol.
But the packing of 5 symbols into a byte was not done here. Instead, they packed 4 symbols into a byte to reduce computational complexity (no unpacking needed).
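(A small illustration of the 5-trits-per-byte scheme described above, in plain Python with hypothetical helper names; as noted, bitnet.cpp itself uses the simpler 4-per-byte layout.)

```python
def pack_trits(trits):
    """Pack 5 ternary values (-1, 0, 1) into one byte: 3^5 = 243 <= 256."""
    assert len(trits) == 5
    value = 0
    for t in reversed(trits):
        value = value * 3 + (t + 1)        # map {-1, 0, 1} -> {0, 1, 2}
    return value

def unpack_trits(byte):
    """Recover the 5 ternary values by repeated divide-and-modulo by 3."""
    trits = []
    for _ in range(5):
        byte, digit = divmod(byte, 3)      # quotient carries on, remainder is one trit
        trits.append(digit - 1)
    return trits

assert unpack_trits(pack_trits([1, -1, 0, 1, -1])) == [1, -1, 0, 1, -1]
```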
rasz: > 1-bit model
> packed 4 symbols into a byte
microslop, typical bunch of two-bit frauds!
vova_hn2: First they claimed that if you use em dashes you are not human
And I did not speak out
Because I was not using em dashes
Then they claimed that if you're crammar is to gud you r not hmuan
And I did not spek aut
Because mi gramar sukcs
Then they claimed that if you actually read the article that you are trying to discuss you are not human...
algoth1: Headline: 100B. Falcon 3 family: 10B. An order of magnitude off
throwaw12: Because this is Microsoft, experimenting and failing is not encouraged; taking less risky bets and getting promoted is. Also, no customer asked them for a 1-bit model, hence the PM didn't prioritize it.
But that doesn't mean the idea is worthless. You could have said the same about Transformers: Google released it but didn't move forward, and it turned out to be a great idea.
embedding-shape: > You could have said the same about Transformers: Google released it but didn't move forward
I don't think you can. Google looked at the research results and continued researching Transformers and related technologies, because they saw the value, particularly for translation. It's part of the original paper, what direction to take; give it a read, it's relatively approachable for being a machine learning paper :) Sure, it took OpenAI to make it into an "assistant" that answered questions, but it's not like Google was completely sleeping on the Transformer, they just had other research directions to go into first.
> But that doesn't mean the idea is worthless.
I agree, it doesn't; hope that wasn't what my message read as :) But ideas that don't actually pan out in reality are slightly less useful than ideas that do pan out once put into practice. The root commenter seems to be saying "This is a great idea, it's all ready, the only missing piece is for someone to do the training and it'll pan out!", which I'm a bit skeptical about, since it's been two years since they introduced the idea.
simonw: Anyone know how hard it would be to create a 1-bit variant of one of the recent Qwen 3.5 models?
nikhizzle: Almost trivial using open source tools, the question is how it performs without calibration/fine tuning.
PeterHolzwarth: Interesting - the account you mention and the GP both post replies that are all about the same length, and the lengths match between the two accounts as well. I get what you mean.
devnotes77: The compute throughput question (whether FMA equals ADD on modern CPUs) is accurate; that's not where the gain is. The real win is memory footprint.
A 100B ternary model packs to roughly 20-25GB (100B params at ~1.58 bits each). FP16 would be ~200GB, INT4 ~50GB. That difference is what moves the "doesn't fit" threshold. You go from needing HBM or multi-GPU NVLink to running on a workstation with 32GB DDR5.
DDR5 at ~100 GB/s is still much slower than HBM at ~3 TB/s, so memory bandwidth is still the inference bottleneck, but bandwidth is only a problem once the model actually fits. For 100B-class models, capacity was the harder constraint. That's what 1.58-bit actually solves.
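(A quick back-of-the-envelope check of those figures, counting weight storage only; this ignores activations, KV cache, and any layers kept at higher precision, so real footprints run somewhat higher.)

```python
params = 100e9  # 100B weights
for name, bits in [("fp16", 16), ("int4", 4),
                   ("ternary, 4 per byte", 2),
                   ("ternary, ideal", 1.58)]:
    print(f"{name:>20}: {params * bits / 8 / 1e9:6.1f} GB")
# fp16 ~200 GB, int4 ~50 GB, packed ternary ~25 GB, ideal ~19.8 GB
```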
nkohari: I would love to understand the thought process behind this. I'm sure it's a fun experiment, to see if it's possible and so on... but what tangible benefit could there be to burning tokens to spam comments on every post?
K0balt: I’ve been rounded up for things I wrote two decades ago because of my em dashes lol. The pitchfork mentality gives me little hope for how things are going to go once we have hive mind AGI robots pervasive in society.
wongarsu: The results would probably be underwhelming. The BitNet paper doesn't give great baselines to compare to, but in their tests a 2B network trained at 1.58 bits using their architecture was better than Llama 3 8B quantized to 1.58 bits. Though that 2B network was about on par with a 1.5B Qwen2.5.
If you have an existing network, making an int4 quant is the better tradeoff. 1.58b quants only become interesting when you train the model specifically for it.
On the other hand, maybe it works much better than expected, because Llama 3 is just a terrible baseline.
vova_hn2: If I were operating a bot farm, at this point I would probably add some bots that go around and accuse legit human users (or just random users) of being bots.
The resulting confusion and frustration would make it much harder for most people to separate signal from noise.
intrasight: It's not so much a "minimally viable LLM" as an LLM that knows natural language well but knows nothing else. Like me - an engineer who knows how to troubleshoot in general but doesn't know about a specific device like my furnace (recent example).
And I don't think that LLM could just Google or check Wikipedia.
But I do agree that this architecture makes a lot of sense. I assume it will become the norm to use such edge LLMs.
giancarlostoro: Correct! I know RAG is a thing, but I wish we could have "DLCs" for LLMs, the way image generation has LoRAs, which are cheaper to train than retraining the entire model and steer the output toward what you want. I would love to pop in the CS "LoRA or DLC" and ask it about functional programming in Elixir, or whatever.
Maybe not crawl the web, but hit a service with pre-hosted, pre-curated content it can digest (and cache) that doesn't necessarily change often. You aren't using it for the latest news necessarily; programming is mostly static knowledge, as a good example.
giancarlostoro: When GPT 3.5 became a thing, it had crawled a very nuanced set of websites; this is what I mean. You basically curate where it sources data from.
bee_rider: Isn't that sort of what RAG is? You'd need an LLM "smart" enough to turn natural user prompts into searches, then some kind of search, then an LLM "smart" enough to summarize the results.
giancarlostoro: Yeah, I think RAG is the idea that will lead us there, though it's a little complicated, because for some subjects, say Computer Science, you need a little more than just "This is Hello World in Go": you might need to understand not just Go syntax on the fly, but more CS nuances that are not covered in one single simple document. The idea is having a model that runs fully locally on a phone or laptop with minimal resources. On the other hand, I can also see smaller models talking to larger models that are cheaper to run in the cloud. I wonder if this is the approach Apple might take with Siri, specifically in order to retain user privacy as much as possible.
radarsat1: I'm curious if 1-bit params can be compared to 4- or 8-bit params. I imagine that 100B is equivalent to something like a 30B model? I guess only evals can say. Still, being able to run a 30B model at good speed on a CPU would be amazing.
regularfry: At some point you hit information limits. With conventional quantisation you see marked capability fall-off below q5. All else being equal, you'd expect an N-parameter 5-bit quant to be roughly comparable to a 3N-parameter ternary model, if they are trained to the same level, just in terms of the amount of information they can possibly hold. So yes, 100B ternary would be in the ballpark of a 30B q5 conventional model, with a lot of hand-waving and sufficiently smart training.
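(The arithmetic behind that ballpark, under the same hand-wavy assumption that raw weight-storage bits are what counts.)

```python
from math import log2

bits_per_ternary = log2(3)                       # ≈ 1.585 bits per weight
ternary_params = 100e9                           # 100B ternary weights
equivalent_5bit_params = ternary_params * bits_per_ternary / 5
print(equivalent_5bit_params / 1e9)              # ≈ 31.7 -> roughly a 30B q5 model
```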
naasking: I think the README [1] for the new CPU feature is of more interest, showing linear speedups with the number of threads. Up to 73 tokens/sec with 8 threads (64 toks/s for their recommended Q6 quant).
[1] https://github.com/microsoft/BitNet/blob/main/src/README.md
RandomTeaParty: > The 1.58-bit approach
Can we stop already with these decimals and just call it "1 trit", which is exactly what it is?