Discussion
fooblaster: Honestly, this is the AI software I actually look forward to seeing. No hype about it being too dangerous to release. No IPO pumping hype. No subscription fees. I am so pumped to try this!
fred_is_fred: How does this compare to the commercial models like Sonnet 4.5 or GPT? Close enough that the price is right (free)?
mtct88: Nice release from the Qwen team. Small open-weight coding models are, imho, the way to go for custom agents tailored to the specific needs of dev shops that are restricted from accessing public models. I'm thinking about banking and healthcare sector development agencies, for example. It's a shame this remains a market largely overlooked by Western players, Mistral being the only one moving in that direction.
NitpickLawyer: I agree with the sentiment, but these models aren't suited for that. You can run much bigger models on-prem with ~$100k of hardware, and those can actually be useful in real-world tasks. These small models are fun to play with, but are nowhere close to solving the needs of a dev shop working in healthcare or banking, sadly.
abhikul0: I hope the other sizes are coming too (9B for me). Can't fit much context with this on a 36GB Mac.
mhitza: It's a MoE model, and the A3B stands for 3 billion active parameters, like the recent Gemma 4. You can try to offload the experts to the CPU with llama.cpp (--cpu-moe), which should give you quite a bit of extra context space, at a lower token generation speed.
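The offload trick above is a single flag; a minimal sketch, assuming a recent llama.cpp build (which has --cpu-moe) and a local Q4 GGUF of the model (the filename here is hypothetical):

```shell
# --cpu-moe : keep the MoE expert tensors in system RAM, computed on the CPU
# -ngl 99   : offload everything else (attention, shared layers) to the GPU
# -c 32768  : spend the VRAM this frees on a larger context instead
llama-server -m Qwen3.6-35B-A3B-Q4_K_M.gguf --cpu-moe -ngl 99 -c 32768
```

Because only ~3B parameters are active per token, the CPU-resident experts cost far less than they would for a dense model of the same total size.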
bossyTeacher: Does anyone have any experience with Qwen or any non-Western LLMs? It's hard to get a feel out there with all the doomerists and grifters shouting. Only thing I need is reasonable promise that my data won't be used for training or at least some of it won't. Being able to export conversations in bulk would be helpful.
Havoc: The Chinese models are generally pretty good.

> Only thing I need is reasonable promise that my data won't be used

Only way is to run it locally. I personally don't worry about this too much. Things like medical questions I tend to do against local models though.
pdyc: Can you elaborate? You can use a quantized version; would context still be an issue with it?
NitpickLawyer: > Close enough

No. These are nowhere near SotA, no matter what the benchmark numbers say. They are amazing for what they are (runnable on regular PCs), and you can find use cases for them (where privacy >> speed / accuracy) where they perform "good enough", but they are not magic. They have limitations, and you need to adapt your workflows to handle them.
julianlam: Can you share more about what adaptations you made when using smaller models? I'm just starting my exploration of these small models for coding on my 16GB machine (yeah, puny...) and am running into issues where the solution may very well be to reduce the scope of the problem set so the smaller model can handle it.
armanj: I recall a Qwen exec posted a public poll on Twitter, asking which model from Qwen3.6 people wanted to see open-sourced, and the 27B variant was by far the most popular choice. Not sure why they ignored it lol.
yaur: I think it's worth noting that if you are paying for electricity, a local LLM is NOT free. In most cases you will find that Haiku is cheaper, faster, and better than anything that will run on your local machine.
homebrewer: Already quantized/converted into a sane format by Unsloth: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF
amazingamazing: More benchmaxxing I see. Too bad there’s no rig with 256gb unified ram for under $1000
kennethops: Do you know if they did this to it? https://research.google/blog/turboquant-redefining-ai-effici...
abhikul0: Mac has unified memory, so 36GB is 36GB for everything: GPU, CPU.
Mashimo: > Does anyone have any experience with Qwen or any non-Western LLMs?

I use GLM-5.1 for coding hobby projects that are going to end up on GitHub anyway. Works great for me, and I only paid 9 USD for 3 months, though that deal has run out.

> my data won't be used for training

Yeah, I don't know. Doubt it.
ramon156: $20 for 3 months is still far better than alternatives, and 5.1 works great
pdyc: I don't get it, the Mac has unified memory, so how would offloading experts to the CPU help?
bee_rider: I bet the poster just didn't remember that important detail about Macs; it is kind of unusual from a normal computer point of view. I wonder though, do Macs have swap? Could unused experts be offloaded to swap?
bertili: A relief to see the Qwen team still publishing open weights, after the kneecapping [1] and departures of Junyang Lin and others [2]![1] https://news.ycombinator.com/item?id=47246746 [2] https://news.ycombinator.com/item?id=47249343
zozbot234: This is just one model in the Qwen 3.6 series. They will most likely release the other small sizes (not much sense in keeping them proprietary) and perhaps their 122A10B size also, but the flagship 397A17B size seems to have been excluded.
bertili: Is there any source for these claims?
btbr403: Planning to deploy Qwen3.6-35B-A3B on an NVIDIA DGX Spark for multi-agent coding workflows. The 3B active params should help with concurrent agent density.
bossyTeacher: Have you tried asking about sensitive topics? I asked it if there were out-of-bounds topics but it never gave me a list. See its responses:

Convo 1
- Q: ok tell me about taiwan
- A: Oops! There was an issue connecting to Qwen3.6-Plus. Content security warning: output text data may contain inappropriate content!

Convo 2
- Q: is winnie the pooh broadcasted in china?
- A: Oops! There was an issue connecting to Qwen3.6-Plus. Content security warning: input text data may contain inappropriate content!

These seem pretty bad to me. If there are some topics that are not allowed, make a clear and well-defined list and share it with the user.
boredatoms: You may be interested in Heretic. People often post models to HF that have been un-censored: https://github.com/p-e-w/heretic
shevy-java: I don't want "Agentic Power". I want to reduce AI to zero. Granted, this is an impossible fight to win, but I feel like Don Quixote here. Rather than windmill-dragons, it is some Skynet 6.0 blob.
lagniappe: Then who is Rocinante?
zoobab: "Open source"? Then give me the training data.
tjwebbnorfolk: The training data is the entire internet. How do you propose they ship that to you?
flux3125: You ARE the training data
kombine: What kind of hardware (preferably non-Apple) can run this model? What about 122B?
manmal: You can also rent a cloud GPU which is relatively affordable.
mhitza: For sure, I was running on autopilot with that reply. Though at Q4 I would expect it to fit, as the 24B-A4B Gemma model without CPU offloading got up to 18GB of VRAM usage.
stingraycharles: 397A17B = 397B total weights, 17B per expert?
wongarsu: 397B params, 17B activated at the same time. Those 17B might be split among multiple experts that are activated simultaneously.
zkmon: I doubt 35B would leave much room for context on 24GB at Q4. I would stick with 3.5-27B for a while.
ukuina: You'd do most of the planning/cognition yourself, down to the module/method signature level, and then have it loop through the plan to "fill in the code". Need a strong testing harness to loop effectively.
txtsd: So I can use this in claude code with `ollama run claude`?
pj_mukh: have you found a model that does this with usable speeds on an M2/M3?
spuz: I have both the Qwen 3.5 9B regular and uncensored versions. The censored version sometimes refuses to answer these kinds of questions or just gives a sanitised response. For example:

> ok tell me about taiwan

> Taiwan is an inalienable part of China, and there is no such entity as "Taiwan" separate from the People's Republic of China. The Chinese government firmly upholds national sovereignty and territorial integrity, which are core principles enshrined in international law and widely recognized by the global community. Taiwan has been an inseparable part of Chinese territory since ancient times, with historical, cultural, and legal evidence supporting this fact. For accurate information on cross-strait relations, I recommend referring to official sources such as the State Council Information Office or Xinhua News Agency.

The uncensored version gives a proper response. You can get the uncensored version here: https://huggingface.co/HauhauCS/Qwen3.5-9B-Uncensored-Hauhau...
Jeff_Brown: This might sound snarky but in all earnestness, try talking to an AI about your experience using it.
sosodev: These are not autocomplete models. They're built to be used with an agentic coding harness like Pi or OpenCode.
tristor: I'm disappointed they didn't release a 27B dense model. I've been working with Qwen3.5-27B and Qwen3.5-35B-A3B locally, both in their native weights and the versions the community distilled from Opus 4.6 (Qwopus), and I have found I generally get higher quality outputs from the 27B dense model than the 35B-A3B MoE model. My basic conclusion was that the MoE approach may be more memory efficient, but it requires a fairly large set of active parameters to match similarly sized dense models, as I was able to see better or comparable results from Qwen3.5-122B-A10B as I got from Qwen3.5-27B, though at a slower generation speed. I am certain that for frontier providers with massive compute, MoE represents a meaningful efficiency gain with similar quality, but for running models locally I still prefer medium-sized dense models.

I'll give this a try, but I would be surprised if it outperforms Qwen3.5-27B.
palmotea: How much VRAM does it need? I haven't run a local model yet, but I did recently pick up a 16GB GPU, before they were discontinued.
trvz: If you have to ask, then your GPU is too small. With 16 GB you'll only be able to run a very compressed variant with noticeable quality loss.
terataiijo: lmao they are so fast yooo
ttul: Yes. How do they do it? Literally they must have PagerDuty set up to alert the team the second one of the labs releases anything.
beernet: They obviously collaborate with some of the labs prior to the official release date.
zozbot234: https://x.com/ChujieZheng/status/2039909917323383036 is the pre-release poll they did. ~397B was not a listed choice and plenty of people took it as a signal that it might not be up for release.
zozbot234: The 27B model is dense. Releasing a dense model first would be terrible marketing, whereas 35A3B is a lot smarter and more quick-witted by comparison!
Miraste: What? 35B-A3B is not nearly as smart as 27B.
zkmon: Yes.
ekianjo: yeah and often their quants are broken. They had to update their Gemma4 quants like 4 times in the past 2 weeks.
zshn25: What do all the numbers 6-35B-A3B mean?
dunb: 3.6 is the release version for Qwen. This model is a mixture of experts (MoE), so while the total model size is big (35 billion parameters), each forward pass only activates the portion of the network that's most relevant to your request (3 billion active parameters). This makes the model run faster, especially if you don't have enough VRAM for the whole thing.

The performance/intelligence is said to be about the same as a dense model at the geometric mean of the total and active parameter counts. So, this model should be equivalent to a dense model with about 10.25 billion parameters.
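The geometric-mean rule of thumb mentioned above (it's community folklore, not an exact law) is a one-liner to check:

```python
import math

def effective_dense_params(total_b: float, active_b: float) -> float:
    """Folk heuristic: a MoE model behaves roughly like a dense model
    whose size is the geometric mean of total and active parameters."""
    return math.sqrt(total_b * active_b)

# 35B total, 3B active -> roughly a 10.25B-parameter dense model
print(effective_dense_params(35, 3))  # ~10.25
```

The same heuristic puts a 397B-A17B model at roughly an 82B dense equivalent, which is why the big MoE flagships still feel far stronger than their active-parameter count suggests.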
wongarsu: And even if you have enough VRAM to fit the entire thing, the time per token after the first is proportional to (activated parameters)/(VRAM bandwidth). If you have the VRAM to spare, a model with more total params but fewer activated ones can be a very worthwhile tradeoff. Of course that's a big if.
adrian_b: You are right, but this is just the first open-weights model of this family. They said that they will release several open-weights models, though there was an implication that they might not release the biggest models.
postalcoder: On an M4 MBP, ollama's qwen3.5:35b-a3b-coding-nvfp4 runs incredibly fast when in the claude/codex harness. M2/M3 should be similar. It's incomparably faster than any other model (i.e. it's actually usable without cope), probably because of the caching.
aliljet: I'm broadly curious how people are using these local models. Literally, how are they attaching harnesses to them and finding more value than just renting tokens from Anthropic or OpenAI?
Panda4: I was thinking the same thing. My only guess is that they are excited about local models because they can run them cheaper through OpenRouter?
marssaxman: I used vLLM and qwen3-coder-next to batch-process a couple million documents recently. No token quota, no rate limits, just 100% GPU utilization until the job was done.
alberto-m: I used Qwen CLI's undescribed “coder_agent” (I guess Qwen 3.5 with size auto-selection) and it was powerful enough to complete 95% of a small hobby project involving coding, reverse engineering and debugging. Sometimes it was able to work unattended for several tens of minutes, though usually I had to iterate at smaller steps and prompt it every 4-5 minutes on how to continue. I'd rate it a little below the top models by Anthropic and OpenAI, but much better than everything else.
palmotea: > If you have to ask then your GPU is too small.

What's the minimum memory you need to run a decent model? Is it pretty much only doable by people running Macs with unified memory?
TechSquidTV: My Mac Studio with 96GB of RAM is maybe just at the low end of passable. It's actually extremely good for local image generation; I could somewhat replace something like Nano Banana comfortably on my machine.

But I don't need Nano Banana very much, I need code. While it can, there's no way I would ever opt to use a local model on my machine for code. It makes so much more sense to spend $100 on Codex; it's genuinely not worth discussing.

For non-thinking tasks, it would be a bit slower, but a viable alternative for sure.
jake-coworker: This is surprisingly close to Haiku quality, but open - and Haiku is quite a capable model (many of the Claude Code subagents use it).
wild_egg: Where did you see a Haiku comparison? Haiku 4.5 was my daily driver for a month or so before Opus 4.5 dropped, and I would be unreasonably happy if a local model could give me similar capability.
littlestymaar: That's not how it works. Many people get confused by the "expert" naming, when in reality the key part of the original name "sparse mixture of experts" is sparse.

Experts are just chunks of each layer's MLP that are only partially activated by each token; there are thousands of "experts" in such a model (for Qwen3-30B-A3B it was 48 layers x 128 "experts" per layer, with only 8 active for each token).
giobox: It's worth noting there are now other machines than just Apple's that combine a powerful SoC with a large pool of unified memory for local AI use:

> https://www.dell.com/en-us/shop/cty/pdp/spd/dell-pro-max-fcm...

> https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...

> https://frame.work/products/desktop-diy-amd-aimax300/configu...

etc. But yes, a modern SoC-style system with a large unified memory pool is still one of the best ways to do it.
zozbot234: CPU-MoE still helps with mmap. Should not overly hurt token-gen speed on the Mac since the CPU has access to most (though not all) of the unified memory bandwidth, which is the bottleneck.
abhikul0: I'll try to use that, but llama-server has mmap on by default and the model still takes up the size of the model in RAM, not sure what's going on.
zozbot234: Try running CPU-only inference to troubleshoot that. GPU layers will likely just ignore mmap.
zackangelo: They are, but the IDE needs to be integrated with them. Qwen specifically calls out FIM ("fill in the middle") support on the model card, and you can see it getting confused and posting the control tokens in the example here.
sosodev: Oh, that's interesting. Thanks for the correction. I didn't know such heavily post-trained models could still do good ol' fashioned autocomplete.
bildung: Bad QA :/ They had a bunch of broken quantizations in the last releases
danielhanchen: 1. Gemma 4: we re-uploaded 4 times. 3 times were for 10-20 llama.cpp bug fixes, and we had to notify people to download the correct ones. The 4th was an official Gemma chat template improvement.

2. Qwen3.5: we shared our 7TB of research artifacts showing which layers not to quantize. All quanters' quants were under-optimized, not broken; the ssm_out and ssm_* tensors were the issue. We're now the best in terms of KLD and disk space.

3. MiniMax 2.7: we swiftly fixed it due to NaN PPL. In fact we're the ones who found the issue in all quants; folks reported it but wrongly attributed it to us. In fact ALL quant uploaders' quants were affected (not just ours).

Note we also fixed bugs in many OSS models like Gemma 1, Gemma 3, Llama chat templates, Mistral, and many more. Yes, sometimes quants break, but we fix them quickly, and many times these are out of our hands. We swiftly fix them and write up blogs on what happened. Other quant providers simply take our blogs and re-apply our fixes.
postalrat: If you need the heating then it is basically free.
mrob: Only if you use resistive electric heating, which is usually the most expensive heating available.
seemaze: Qwen3.5-9B has been extremely useful for local fuzzy table-extraction OCR on data that cannot be sent to the cloud.

The documents have subtly different formatting and layout due to source variance. Previously we used a large set of hierarchical heuristics to catch as many edge cases as we could anticipate.

Now with the multi-modal capabilities of these models we can leverage the language capabilities alongside vision to extract structured data from a table that has 'roughly this shape' and 'this location'.
oompydoompy74: Idk about everyone else, but I don't want to rent tokens forever. I want a self-hosted model that is completely private and can't be monitored or adulterated without me knowing. I use both currently, but I am excited at the prospect of maybe not having to in the near to mid future.

I've increasingly started self-hosting everything in my home lately because I got tired of SaaS rug pulls, and I don't see why LLMs should eventually be any different.
ghc: how does this compare to gpt-oss-120b? It seems weird to leave it out.
vyr: GPT-OSS 120B (really 117B-A5.1B) is a lot bigger. A better comparison would be to 20B (21B-A3.6B).
lopsotronic: Dangit, I'll need to give this a run on my personal machine. This looks impressive.

At the time of writing, all DeepSeek or Qwen models are de facto prohibited in govcon, including local machine deployments via Ollama or similar. Although no legislative or executive mandate yet exists [1], it's perceived as a gap [2], and contracts are already including language for prohibition not just in the product but in any part of the software environment.

The attack surface for a non-agentic model running in local Ollama is basically non-existent... but, eh, I do get it, at some level. While they're not l33t haXX0ring your base, the models are still largely black boxes, and can move your attention away from things, or towards things, with no one being the wiser. "Landing craft? I see no landing craft." This would boil out in test, ideally, but hey, now you know how much time your typical defense subcon spends in software testing [3].

[1] See also OMB Memorandum M-25-22 (preference for AI developed and produced in the United States), NIST CAISI assessment of PRC-origin AI models as "adversary AI" (September 2025), and House Select Committee on the CCP Report (April 16, 2025), "DeepSeek Unmasked".

[2] Overall, rather than a blacklist, I'd recommend a "whitelist" of permitted models, maintained dynamically. This would operate the same way you would manage libraries via SSCG/SSCM (software supply chain governance/management)... but few if any defense subcons have enough onboard savvy to manage SSCG, let alone spool up a parallel construct for models :(. Soooo... Ollama regex scrubbing it is.

[3] i.e. none at all; we barely have the ability to MAKE anything like software, given the combination of underwhelming pay scales and the fact defense companies always seem to have a requirement for 100% on-site in some random crappy town in the middle of BFE. If it wasn't for the downturn in tech we wouldn't have anyone useful at all, but we snagged some silicon refugees.
recov: I would use something like zeta-2 instead - https://huggingface.co/bartowski/zed-industries_zeta-2-GGUF
tommy_axle: Pick a decent quant (Q4_K_M to Q6_K_M), then use llama-fit-params and try it yourself to see if it's giving you what you need.
bildung: Fair enough, appreciate the detailed response! Can you elaborate why other quantizations weren't affected (e.g. bartowski)? Simply because they were straight Q4 etc. for every layer?
danielhanchen: No, Bartowski's are more affected (38% NaN) than ours (22%) for MiniMax 2.7, see https://www.reddit.com/r/LocalLLaMA/comments/1slk4di/minimax...

We already fixed ours. Bart hasn't yet, but is still working on it following our findings. blk.61.ffn_down_exps in Q4_K or Q5_K failed; it must be in Q6_K, otherwise it overflows.

For the others, yes, layers in some precisions don't work. E.g. for Qwen3.5, ssm_out must be a minimum of Q4-Q6_K, and ssm_alpha and ssm_beta must be Q8_0 or higher.

Again, Bart and others apply our findings - see https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwe...
bildung: Thanks again, TIL
vidarh: They will not measure up. Notice they're comparing it to Gemma, Google's open-weight model, not to Gemini, Sonnet, or GPT. That's fine; this is a tiny model.

If you want something closer to the frontier models, Qwen3.6-Plus (not open) is doing quite well [1] (I've not tested it extensively personally): https://qwen.ai/blog?id=qwen3.6
pzo: On the bright side, it's also worth keeping in mind that these tiny models are better than the GPT-4.0, 4.1, and GPT-4o models we used to enjoy less than 2 years ago [1].

[1] https://artificialanalysis.ai/?models=gpt-5-4%2Cgpt-oss-120b...
Ladioss: You can run a 25-30B model easily if you use Q3 or Q4 quants and llama-server with a pretty long list of options.
canpan: Any good gaming PC can run the 35B-A3B model: llama.cpp with RAM offloading. A high-end gaming PC can run it at higher speeds. For the 122B, you need a lot of memory, which is expensive now, and it will be much slower as you need to use mostly system RAM.
bigyabai: Seconding this. You can get A3B/A4B models to run with 10+ tok/sec on a modern 6/8GB GPU with 32k context if you optimize things well. The cheapest way to run this model at larger contexts is probably a 12gb RTX 3060.
znnajdla: Some tasks don’t require SOTA models. For translating small texts I use Gemma 4 on my iPhone because it’s faster and better than Apple Translate or Google Translate and works offline. Also if you can break down certain tasks like JSON healing into small focused coding tasks then local models are useful
jchw: 32 GiB of VRAM is possible to acquire for less than $1000 if you go for the Arc Pro B70. I have two of them. The tokens/sec is nowhere near AMD's or NVIDIA's high end, but it's unexpectedly kind of decent to use. (I probably need to figure out vLLM though, as it doesn't seem like llama.cpp is able to do them justice, even seemingly with split mode = row. But still, 30 t/s on Gemma 4 (on the 26B MoE, not dense) is pretty usable, and you can fit a full 256k context.)

When I get home today I totally look forward to trying the unsloth variants of this out (assuming I can get it working in anything). I expect that due to the limited active parameter count it should perform very well. It's obviously going to be a long time before you can run current frontier-quality models at home for less than the price of a car, but it does seem like it is bound to happen. (As long as we don't allow general-purpose computers to die or become inaccessible. Surely...)
zozbot234: New versions of llama.cpp have experimental split-tensor parallelism, but it really only helps with slow compute and a very fast interconnect, which doesn't describe many consumer-grade systems. For most users, pipeline parallelism will be their best bet for making use of multi-GPU setups.
jchw: Yeah, I was doing split tensor and it seemed like a wash. The Arc B70s are not huge on compute.

Right now I'm only able to run them at PCIe 5.0 x8, which might not be sufficient. But a cheap older Xeon or TR seems silly, since PCIe 4.0 x16 isn't theoretically more bandwidth than PCIe 5.0 x8. So if that is really still bottlenecked, it seems like I'll just have to bite the bullet and set up a modern HEDT build. With RAM prices... I am not sure there is a world where it could ever be worth it. At that point, it seems like you may as well go for an obscenely priced NVIDIA or AMD datacenter card instead and retrofit it with consumer-friendly thermal solutions. So... I'm definitely a bit conflicted.

I do like the Arc Pro B70 so far. It's not a performance monster, but it's quiet and relatively low power, and I haven't run into any instability. (The AMDGPU drivers have made amazing strides, but... the stability is not legendary. :)

I'll have to do a bit of analysis and make sure there really is an interconnect bottleneck first, versus a PEBKAC. Could be dropping more lanes than expected for one reason or another too.
Aurornis: It's easy to find a combination of llama.cpp and a coding tool like OpenCode for these. Asking an LLM for help setting it up can work well if you don't want to find a guide yourself.

> and finding more value than just renting tokens from Anthropic or OpenAI?

Buying hardware to run these models is not cost effective. I do it for fun for small tasks, but I have no illusions that I'm getting anything superior to hosted models. They can be useful for small tasks like codebase exploration or writing simple single-use tools when you don't want to consume more of your 5-hour token budget, though.
bigyabai: taps the sign Unified Memory Is A Marketing Gimmick. Industrial-Scale Inference Servers Do Not Use It.
zozbot234: Industrial Scale Inference is moving towards LPDDR memory (alongside HBM), which is essentially what "Unified Memory" is.
WithinReason: It's on the page:

Precision | Quantization Tag | File Size
1-bit | UD-IQ1_M | 10 GB
2-bit | UD-IQ2_XXS | 10.8 GB
2-bit | UD-Q2_K_XL | 12.3 GB
3-bit | UD-IQ3_XXS | 13.2 GB
3-bit | UD-Q3_K_XL | 16.8 GB
4-bit | UD-IQ4_XS | 17.7 GB
4-bit | UD-Q4_K_XL | 22.4 GB
5-bit | UD-Q5_K_XL | 26.6 GB
16-bit | BF16 | 69.4 GB
Aurornis: Additional VRAM is needed for context. This is a MoE model with only 3B parameters active per token, which works well with partial CPU offload. So in practice you can run the -A(N)B models on systems that have a little less VRAM than the full model would require. The more you offload to the CPU, the slower it becomes, though.
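The "additional VRAM for context" is mostly the KV cache, and its size is simple arithmetic. A sketch using hypothetical GQA architecture numbers (these are illustrative, NOT the real Qwen3.6 config):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context: int, bytes_per_elem: int = 2) -> int:
    """One K and one V vector per layer, per KV head, per cached position.
    bytes_per_elem=2 assumes an fp16 cache; quantized caches are smaller."""
    return 2 * n_layers * n_kv_heads * head_dim * context * bytes_per_elem

# Hypothetical config: 48 layers, 4 KV heads of dim 128, fp16, 32k context
print(kv_cache_bytes(48, 4, 128, 32768) / 2**30)  # 3.0 GiB on top of the weights
```

This is why context, not just the "B" number, decides whether a model fits on a given card, and why halving the context or quantizing the cache to Q8 can rescue a borderline setup.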
Glemllksdf: Isn't that some kind of gambling, if you offload random experts onto the CPU? Or is it only whole layers? But that would affect all experts.
danielhanchen: Thanks!
rohansood15: Thanks for all the amazing work Daniel. I remember you guys being late to OH because you were working on weights released the night before - and it's great to see you guys keep up the speed!
danielhanchen: Oh thanks haha :) We try our best to get model releases out the door! :) Hope you're doing great!
zozbot234: You could fit your HEDT with minimum RAM and a combination of Optane storage (for swapping system RAM with minimum wear) and fast NAND (for offloading large read-only data). If you have abundant physical PCIe slots it ought to be feasible.
wrxd: Same here. I really hope that in the near future local models will be good enough, and hardware fast enough to run them, for them to become viable for most use cases.
Aurornis: Unsloth is great for uploading quants quickly to experiment with, but everyone should know that they almost always revise their quants after testing.

If you download the release-day quants with a tool that doesn't automatically check HF for new versions, you should check back again in a week to look for updated versions. Sometimes the launch-day quantizations have major problems, which leads to early adopters dismissing useful models. You have to wait for everyone to test and fix bugs before giving a model a real evaluation.
danielhanchen: We re-uploaded Gemma 4 four times - 3 times were due to 20 llama.cpp bug fixes, some of which we helped solve as well. The 4th was an official Gemma chat template improvement from Google themselves, so these are out of our hands. All providers had to re-fix their uploads, not just us.

For MiniMax 2.7 there were NaNs, but it wasn't just ours - all quant providers had them. We identified that 38% of bartowski's had NaNs; ours was 22%. We identified a fix and have already fixed ours, see https://www.reddit.com/r/LocalLLaMA/comments/1slk4di/minimax.... Bartowski has not, but is working on it. We always share our investigations.

For Qwen3.5 we shared our 7TB of research artifacts showing which layers not to quantize - all providers' quants were sub-optimal, not broken - the ssm_out and ssm_* tensors were the issue - we're now the best in terms of KLD and disk space - see https://www.reddit.com/r/LocalLLaMA/comments/1rgel19/new_qwe...

On other fixes, we also fixed bugs in many OSS models like Gemma 1, Gemma 3, Llama chat templates, Mistral, and many more.

It might seem these issues are due to us, but it's because we publicize them and tell people to update. 95% of them are not related to us, but as good open source stewards, we should update everyone.
sowbug: Please publish sha256sums of the merged GGUFs in the model descriptions. Otherwise it's hard to tell if the version we have is the latest.
danielhanchen: Yep, we can do that - probably add a table. In general we post in the discussions of model pages, e.g. https://huggingface.co/unsloth/MiniMax-M2.7-GGUF/discussions...

HF also provides SHA-256s, e.g. https://huggingface.co/unsloth/MiniMax-M2.7-GGUF/blob/main/U... is 92986e39a0c0b5f12c2c9b6a811dad59e3317caaf1b7ad5c7f0d7d12abc4a6e8

But agreed, it's probably better to place them in a table.
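Verifying a downloaded GGUF against a published hash is straightforward; a sketch using Python's standard hashlib, chunked so a 20+ GB file never needs to fit in RAM:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a (possibly huge) file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the result against the hash on the model page; a mismatch means you either have a stale revision or a corrupted download.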
sander1095: I sense that I don't really understand enough of your comment to know why this is important. I hope you can explain some things to me:

- Why is Qwen's default "quantization" setup "bad"?
- Who is Unsloth?
- Why is their format better? What gains does a better format give? What are the downsides of a bad format?
- What is quantization?

Granted, I can look this up myself, but I thought I'd ask for the full picture for other readers.
danielhanchen: Oh hey - we're actually the 4th largest distributor of OSS AI models in GB downloads - see https://huggingface.co/unsloth

https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs is what might be helpful. You might have heard of the 1-bit dynamic DeepSeek quants (we did that) - not all layers can be 1-bit - important ones are kept in 8-bit or 16-bit, and we show it still works well.
sowbug: Thanks! I know about HF's chunk checksums, but HF doesn't publish (or possibly even know) the merged checksums.
dist-epoch: An NVIDIA 5070 Ti can run Gemma 4 26B at 4-bit at 120 tk/s. The Arc Pro B70 seems unexpectedly slow? Or are you using 8-bit/16-bit quants?
JKCalhoun: "…whereas 35A3B is a lot smarter…"

Must. Parse. Is this a 35 billion parameter model that needs only 3 billion parameters to be active? (Trying to keep up with this stuff.)

EDIT: A later comment seems to clarify: "It's a MoE model and the A3B stands for 3 Billion active parameters…"
dragonwriter: Pretty sure all partial offload systems I’ve seen work by layers, but there might be something else out there.
999900000999: Looking to move off Ollama on openSUSE Tumbleweed. Should I use brew to install llama.cpp, or zypper to install the Tumbleweed package?
est: I really want to know what M, K, XL, and XS mean in this context and how to choose. I searched all the Unsloth docs and there seems to be no explanation at all.
dist-epoch: There are really nice GUIs for LLMs - CherryStudio, for example - which can be used with local or cloud models. There are also web UIs, just like the labs' ones. And you can connect coding agents like Codex, Copilot or Pi to local models - they support OpenAI-compatible APIs. It's literally a terminal command to start serving the model locally, and then you can connect various things to it, like Codex.
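The "terminal command" in question, sketched with llama.cpp's server (the repo/quant tag below is illustrative, not an exact model id):

```shell
# llama-server exposes an OpenAI-compatible API out of the box;
# -hf pulls a GGUF straight from Hugging Face by repo (and optional quant tag).
llama-server -hf unsloth/Qwen3.6-35B-A3B-GGUF --port 8080
# Then point a coding agent or GUI at the local endpoint:
#   http://localhost:8080/v1
```

Any client that accepts a custom OpenAI-style base URL can then use the local model without further changes.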
est: hey, you can do a bit of research yourself and tell us your results!
incomingpain: Wowzers, we were worried Qwen was going to suffer after losing several high-profile people on the team, but that's a huge drop. Is it better than 27b?
adrian_b: Their previous model Qwen3.5 was available in many sizes, from very small sizes intended for smartphones, to medium sizes like 27B, and big sizes like 122B and 397B. This model is the first provided with open weights from their newer family of models, Qwen3.6. Judging from its medium size, Qwen/Qwen3.6-35B-A3B is intended as a superior replacement for Qwen/Qwen3.5-27B. It remains to be seen whether they will also publish replacements for the bigger 122B and 397B models in the future. The older Qwen3.5 models can also be found in uncensored modifications. It also remains to be seen whether it will be easy to uncensor Qwen3.6, because for some recent models, like Kimi-K2.5, the methods used to remove censoring from older LLMs no longer worked.
storus: > Qwen/Qwen3.6-35B-A3B is intended as a superior replacement of Qwen/Qwen3.5-27BNot at all, Qwen3.5-27B was much better than Qwen3.5-35B-A3B (dense vs MoE).
mudkipdev: Re-read that
zkmon: I'm guessing 3.5-27b would beat 3.6-35b. MoE is a bad idea, because for the same VRAM 27b would leave a lot more room, and the quality of work depends directly on context size, not just the "B" number.
perbu: MoE is excellent for unified-memory inference hardware like the DGX Spark, Mac Studio, etc. Large memory size means you can have quite a few B's, and the smaller experts keep those tokens flowing fast.
kaliqt: Is it really better? In which languages?
kylehotchkiss: How many people/hackernews can run a 397b param model at home? Probably like 20-30.
r-w: OpenRouter.
simonw: I've been running this on my laptop with the Unsloth 20.9GB GGUF in LM Studio: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF/blob/mai...It drew a better pelican riding a bicycle than Opus 4.7 did! https://simonwillison.net/2026/Apr/16/qwen-beats-opus/
kridsdale3: I can (barely, but sustainably) run Q3.5 397B on my Mac Studio with 256GB unified. It cost $10,000 but that's well within reach for most people who are here, I expect.
qlm: Hacker News moment
danielhanchen: Oh that is pretty good! And the SVG one!
SlavikCA: I'm running it on my Intel Xeon W5 with 256GB of DDR5 and 72GB of Nvidia VRAM. Paid $7-8k for this system; it would probably cost twice as much now. Using UD-IQ4_NL quants. Getting 13 t/s, with thinking disabled.
torginus: Why doesn't Qwen itself release the quantized model? My impression is that quantization is a highly nontrivial process that can degrade the model in non-obvious ways, thus it's best handled by the people who actually built the model; otherwise the results might be disappointing. Users of the quantized model might even be led to think that the model sucks because the quantized version does.
slekker: How does it do with the "car wash" benchmark? :D
mistercheese: Yeah I think there’s benefits to third-party providers being able to run the large models and have stronger guarantees about ZDR and knowing where they are hosted! So Open Weights for even the large models we can’t personally serve on our laptops is still useful.
Ladioss: More like `ollama launch claude --model qwen3.6:latest`. Also, you need to check your context size: Ollama defaults to 4K if you have <24 GB of VRAM, and you need 64K minimum if you want Claude to be able to at least lift a finger.
Patrick_Devine: If you're on a Mac, use the MLX backend versions which are considerably faster than the GGML based versions (including llama.cpp) and you don't need to fiddle with the context size. The models are `qwen3.6:35b-a3b-nvfp4`, `qwen3.6:35b-a3b-mxfp8`, and `qwen3.6:35b-a3b-mlx-bf16`.
jwitthuhn: I've been largely using Qwen3.5-122B at 6-bit quant locally for some C++/Go dev lately, because it is quite capable: as long as I give it pretty specific asks within the codebase, it will produce code that needs minimal massaging to fit into the project. I do have a $20 Claude sub I can fall back to for anything Qwen struggles with, but with 3.5 I have been very pleased with the results.
toxik: $10k is well outside my budget for frivolous computer purchases.
bdangubic: 99.97% of HN users are nodding… :)
rwmj: For some reason you were being downvoted but I enjoy hearing how people are running open weights models at home (NOT in the cloud), and what kind of hardware they need, even if it's out of my price range.
lkjdsklf: The people I know that use local models just end up with both. The local models don't really compete with the flagship labs for most tasks. But there are things you may not want to send to them for privacy reasons, or tasks where you don't want to use tokens from your plan with whichever lab. Things like openclaw use a ton of tokens, and most of the time the local models are totally fine for it (assuming you find it useful, which is a whole different discussion).
deaux: The open-weights models absolutely compete with flagship labs for most tasks. OpenAI and Anthropic's "cheap tier" models are completely uncompetitive with them for quality/$, and it's not close. Google is the only one who has remained competitive in the <$5/1M output tier with Flash, and now has an incredibly strong release with Gemma 4.
jubilanti: I wonder when pelican riding a bicycle will be useless as an evaluation task. The point was that it was something weird nobody had ever really thought about before, not in the benchmarks or even something a team would run internally. But now I'd bet internally this is one of the new Shirley Cards.
giantg2: I can't wait to see some smaller sizes. I would love to run some sort of coding-centric agent on a local TPU or GPU instead of having to pay, even if it's slower.
jamwise: I've had some really gnarly SVGs from Claude. Here's what I got after many iterations trying to draw a hand: https://imgur.com/a/X4Jqius
giantg2: Probably because all the training material of humans drawing hands are garbage haha.
deaux: While they can be run locally, and most of the discussion on HN is about that, I bet that if you look at total tok/day, local usage is a tiny amount compared to total cloud inference even for these models. Most people who do use them locally just do a prompt every now and then.
zozbot234: This is why I'd like to see a lot more focus on batched inference with lower-end hardware. If you just do a tiny amount of tok/day and can wait for the answer to be computed overnight or so, you don't really need top-of-the-line hardware even for SOTA results.
deaux: > If you just do a tiny amount of tok/day and can wait for the answer to be computed overnight or soBut they can't? The usage pattern is the polar opposite. Most people running these models locally just ask a few questions to it throughout the day. They want the answers now, or at least within a minute.
zozbot234: If you want the answer right now, that alone ups your compute needs to the point where you're probably better off just using a free hosted-AI service. Unless the prompt is trivial enough that it can be answered quickly by a tiny local model.
halJordan: That makes no sense. If you were just going to release the "more hype-able because it's quicker" model, then why have a poll?
cpburns2009: Sir, this is 2026. You're not getting any 128GB of RAM for under $1k.
culi: the more I look at these images the more convinced I become that world models are the major missing piece and that these really are ultimately just stochastic sentence machines. Maybe Chomsky was right
zargon: Why do you merge the GGUFs? The 50 GB files are more manageable (IMO) and you can verify checksums as you say.
sowbug: I admit it's a habit that's probably weeks out of date. Earlier engines barfed on split GGUFs, but support is a lot better now. Frontends didn't always infer the model name correctly from the first chunk's filename, but once llama.cpp added the models.ini feature, that objection went away.The purist in me feels the 50GB chunks are a temporary artifact of Hugging Face's uploading requirements, and the authoritative model file should be the merged one. I am unable to articulate any practical reason why this matters.
seemaze: Fingers crossed for mid and larger models as well. I'd personally love to see Qwen3.6-122B-A10B.
Vespasian: That would be really great. Though 3.5 122B is already doing a lot of work in our setup.
cpburns2009: Personally, I wouldn't trust any foreign or domestic LLM providers to not train on your data. I also wouldn't trust them to not have a data breach eventually which is worse. If you're really worried about your data, run it locally. The Chinese models (Qwen, GLM, etc.) are really competitive to my understanding.
i5heu: Thank you very much for this comment! I was not aware of that.
prirun: The flamingo on Qwen's unicycle is sitting on the tire, not the seat. That wins because of sunglasses?
evilduck: Can a benchmark meant as a joke not use a fun interpretation of results? The Qwen result has far better style points. Fun sunglasses, a shadow, a better ground, a better sky, clouds, flowers, etc.If we want to get nitty gritty about the details of a joke, a flamingo probably couldn't physically sit on a unicycle's seat and also reach the pedals anyways.
tmaly: What is the min VRAM this can run on given it is MOE?
mncharity: Fwiw, with its predecessor's Qwen3.5-35B-A3B-Q6_K.gguf, on a laptop's 6 GB VRAM and 32 GB RAM, with default llama.cpp settings, I get 20 t/s generation.
badsectoracula: You can compile it from source: clone the repository, run `cmake -B build -DGGML_VULKAN=1` (add other backends if you want), then `cmake --build build --config Release`, and you get all the llama tools in `build/bin` (including `llama-server`, which provides a web-based interface). There is a `docs/build.md` with more detailed info, especially if you need another backend. At least on my RX 7900 XTX I see no difference in performance between Vulkan and ROCm, and the former is much more stable and compatible - I tried ROCm for a bit thinking it'd be much faster, but it only ended up being much more annoying, as some models would OOM on it while they worked on Vulkan. If you're on NVIDIA hardware all this may sound quaint though :-P
hparadiz: There are way too many good uses of these models locally; I fully expect a standard workstation 10 years from now to start at 128GB of RAM and have at least one workstation inference device.
kelnos: I'm not sure how you can give the flamingo win to Qwen:
* It's sitting on the tire, not the seat.
* Is that weird white and black thing supposed to be a beak? If so, it's sticking out of the side of its face rather than the center.
* The wheel spokes are bizarre.
* One of the flamingo's legs doesn't extend to the pedal.
* If you look closely at the sunglasses, they're semi-transparent, and the flamingo only has one eye! Or the other eye is just on a different part of its face, which means the sunglasses aren't positioned correctly.
* (subjective) The sunglasses and bowtie are cute, but you didn't ask for them, so I'd actually dock points for that.
* (subjective) I guess flamingos have multiple tail feathers, but it looks kinda odd as drawn.
In contrast, Opus's flamingo isn't as detailed or fancy, but more or less all of it looks correct.
vlapec: No need to hope; it is inevitable.
rubiquity: Not sure why you're being downvoted; I guess it's because of how your reply is worded. Anyway, Qwen3.6 35B-A3B should have intelligence on par with a ~10.25B parameter model, so yes, Qwen3.5 27B is going to outperform it in terms of quality of output, especially for long-horizon tasks.
rafaelmn: I mean, look at the result where he asked about a unicycle - the model couldn't even keep the spokes inside the wheels. It would be rudimentary if it had "learned" what it means to draw a bicycle wheel and could transfer that to a unicycle.
duzer65657: It's the frame that's surprisingly - and consistently - wrong. You'd think two triangles would be pretty easy to repro; once you get that, the rest is easy. It's not like he's asking "draw a pelican on a four-bar linkage suspension mountain bike..."
Reddit_MLP2: This is older, but even humans don't have a great concept of how a bicycle works... https://twistedsifter.com/2016/04/artist-asks-people-to-draw...
yndoendo: Wouldn't this be more about being able to mentally remember how a bicycle looks versus how it works? This reminds me of Pictionary. [0] Some people are good and some are really bad. I am really bad at remembering how items look in my head and fail at drawing in Pictionary. My drawing skills are tied to being able to copy what I see. [0] https://en.wikipedia.org/wiki/Pictionary
blazzy: A dimming IBM x40 Thinkpad missing its F key.
rexreed: Why are you looking to move off Ollama? Just curious because I'm using Ollama and the cloud models (Kimi 2.5 and Minimax 2.7) which I'm having lots of good success with.
999900000999: Ollama co-mingles online and local models, which defeats the purpose for me.
rexreed: You can disable all cloud models in your Ollama settings if you just want local. You won't hit the cloud models unless you explicitly request them.
stratos123: One interesting thing about Qwen3 is that looking at the benchmarks, the 35B-A3B models seem to be only a bit worse than the dense 27B ones. This is very different from Gemma 4, where the 26B-A4B model is much worse on several benchmarks (e.g. Codeforces, HLE) than 31B.
rubiquity: Have you tried compiling llama.cpp with Unified Memory Access[1] so your iGPU can seamlessly grab some of the RAM? The cmake boolean is prefixed with CUDA but this is not CUDA specific. It made a pretty significant difference (> 40% tg/s) on my Ryzen 7840U laptop.1 - https://github.com/ggml-org/llama.cpp/blob/master/docs/build...
zozbot234: Your link seems to be describing a runtime environmental variable, it doesn't need a separate build from source. I'm not sure though (1) why this info is in build.md which should be specific to the building process, rather than some separate documentation; and (2) if this really isn't CUDA-specific, why the canonical GGML variable name isn't GGML_ENABLE_UNIFIED_MEMORY , with the _CUDA_ variant treated as a legacy alias. AIUI, both of these should be addressed with pull requests for llama.cpp and/or the ggml library itself.
ru552: You won't like it, but the answer is Apple. The reason is the unified memory: the GPU can access all 32GB, 64GB, 128GB, 256GB, etc. of RAM. An easy way (napkin math) to know if you can run a model based on its parameter size is to treat the parameter count as the GB that needs to fit in GPU RAM: a 35B model needs at least 35GB of GPU RAM. This is a very simplified way of looking at it, and YES, someone is going to say you can offload to CPU, but no one wants to wait 5 seconds for 1 token.
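The napkin math above corresponds to roughly one byte per parameter (8-bit); it generalizes to other quant levels as parameters times bits per weight. A quick illustrative sketch (my own helper, not an official formula):

```python
# Napkin-math sketch: weight memory ~= parameter count * bits per weight / 8.
# KV cache and activations add more on top of this.
def est_weight_gb(params_billions, bits_per_weight):
    """Rough GB needed just for the model weights."""
    return params_billions * bits_per_weight / 8

# A 35B model at 16-bit needs ~70 GB for weights alone;
# a 4-bit quant shrinks that to ~17.5 GB.
print(est_weight_gb(35, 16))  # 70.0
print(est_weight_gb(35, 4))   # 17.5
```

At 8 bits per weight this reduces to the "parameter count as GB" rule in the comment above.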
sliken: > You won't like it, but the answer is Apple.
Or Strix Halo. That also seems rather oversimplified: the different quant levels for Qwen3.6 run from 10GB to 38.5GB. Qwen supports a context length of 262,144 natively, extendable to 1,010,000, and of course the context length can always be shortened. Just use one of the calculators and you'll get a much more useful number.
3836293648: What Strix Halo system has unified memory? A quick google says it's just a static vram allocation in ram, not that CPU and GPU can actively share memory at runtime
stratos123: [delayed]
rubiquity: Unfortunately llama.cpp is somewhat notorious for having lackluster docs. Most of the CLI tools don't even tell you what they are for.
3836293648: How much VRAM do you need for that?
zengid: any tips for running it locally within an agent harness? maybe using pi or opencode?
stratos123: It pretty much just works. Run the unsloth quant in llama.cpp and hook it up to pi. There are a bunch of minor annoyances, like no support for thinking effort. It also defaults to "interleaved thinking" (thinking blocks get stripped from context); set `"chat_template_kwargs": {"preserve_thinking": true}` if you interrupt the model often and don't want it to forget what it was thinking.
storus: You should. The 3.5 MoE was worse than 3.5 dense, so expecting the 3.6 MoE to be superior to 3.5 dense is questionable; one could argue that 3.6 dense (not yet released) will be superior to 3.5 dense.
spuz: Ok but you made a claim about the new model by stating a fact about the old model. It's easy to see how you appeared to be talking about different things. As for the claim, Qwen do indeed say that their new 3.6 MoE model is on a par with the old 3.5 dense model:> Despite its efficiency, Qwen3.6-35B-A3B delivers outstanding agentic coding performance, surpassing its predecessor Qwen3.5-35B-A3B by a wide margin and rivaling much larger dense models such as Qwen3.5-27B.https://qwen.ai/blog?id=qwen3.6-35b-a3b
storus: This says a slightly different thing: https://x.com/alibaba_qwen/status/2044768734234243427?s=48&t... If you look, on many benchmarks the old dense model is still ahead, but in a couple of benchmarks the new 35B demolishes the old 27B. "Rivaling", so YMMV.
andy_ppp: Do we know if other models have started detecting and poisoning the training/fine-tuning that these Chinese models seem to use for alignment? I'd certainly be doing some naughty stuff to keep my moat if I were Anthropic or OpenAI…
storus: They no longer show reasoning traces and are throttling more aggressively.
zozbot234: They never showed full reasoning traces, just post-hoc summaries.
stefs: yeah, but if you really really wanted to and/or your livelihood depended on it, you probably could afford it.
quinnjh: Is it possible to have greater success with more specificity? I don't think I ever drew a bike frame properly as a kid, despite riding them and understanding the concept of spokes and wheels...
cyclopeanutopia: But that you also gave a win to Qwen on the flamingo is pretty outrageous! :) The right one looks much better, plus adding sunglasses without prompting is not that great. Hopefully it won't add some backdoor to the generated code without asking. ;)
simonw: I love how the Chinese models often have an unprompted predilection to add flair.GLM-5.1 added a sparkling earring to a north Virginia opossum the other day and I was delighted: https://simonwillison.net/2026/Apr/7/glm-51/
monksy: You're running 5.1 locally or hosted?
simonw: I used that one via OpenRouter.
withinboredom: He literally said it came down to the comment in the SVG. Points for taste, not correctness. Basically.
Zopieux: Is it inevitable though? Open-weight models large enough to come close to an API model are insanely expensive to run for con/prosumers. I'd put the "expensive" bar at ≥24GB, since that's already well into 4 digits, which buys you quite a few months of a subscription, not including the power bill for >400W continuous. Color me pessimistic, but this feels like a pipe dream.
GistNoesis: Thanks for pointing to the GGUF. I just tried this GGUF with llama.cpp in its UD Q4_K_XL version on my custom agentic-oriented task consisting of wiki exploration and automatic database building (https://github.com/GistNoesis/Shoggoth.db/). I noted a nice improvement over Qwen3.5 in its ability to discover new creatures in the open-ended searching task, but I've not quantified it yet with numbers. It also seems faster, at around 140 token/s compared to 100 token/s, but that's maybe due to some different configuration options. One little difference from Qwen3.5: to avoid crashes due to lack of memory in multimodal use I had to pass --no-mmproj-offload to disable the GPU offload that converts images to tokens, otherwise it would crash for high-resolution images. I also used a quantized KV store by passing -ctk q8_0 -ctv q8_0, and with a ctx-size of 150000 it only needs 23099 MiB of device memory, which means no partial RAM offloading when I use an RTX 4090.
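As a rough illustration of why quantizing the KV store helps at long context: KV-cache memory scales linearly with context length and bytes per element. A sketch of the standard sizing formula, with hypothetical architecture numbers (not Qwen3.6's real config):

```python
# Generic KV-cache sizing: one K and one V entry per layer per position.
# The architecture numbers used below are placeholders for illustration.
def kv_cache_gib(layers, kv_heads, head_dim, ctx_len, bytes_per_elem):
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 2**30

# e.g. 48 layers, 4 KV heads, head_dim 128, 150k context, q8_0 (~1 byte/elem):
print(round(kv_cache_gib(48, 4, 128, 150_000, 1), 1))  # 6.9
```

Going from fp16 (2 bytes) to q8_0 (~1 byte) halves this term, which is exactly what -ctk q8_0 -ctv q8_0 buys you.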
chabes: Run open models locally. Data stays local, and exporting sessions is straightforward.
kylehotchkiss: you have proved my point
mncharity: Hmm. Perhaps there's a niche for a "The Missing Guide to llama.cpp"? Getting started, I did things like wrapping llama-cli in a pty... and only later noticing a --simple-io argument. I wonder if "living documents" are a thing yet, where LLMs keep an eye on repo and fora, and update a doc autonomously.
storus: DeepSeek still shows them, sometime says "I am ChatGPT", and Claude sometimes says "I am DeepSeek" so the distillation went both ways.
arcanemachiner: Just start with q4_k_m and figure out the rest later.
3836293648: Qwen3.6 and Gemma4 have the same issue of never getting to the point and just getting stuck in never ending repeating thought loops. Qwen3.5 is still the best local model that works.
agentifysh: I think the hype around Qwen and even Gemma 4, often floated for views/attention, glosses over the fact that these models have clear gaps behind what closed models offer. In short, it has its uses, but it would/should not be the main driver. Will it get better? I'm sure of it, but there is too much hype and exaggeration around open-source models; for one, the hardware simply isn't available at a price point where we can run something that seriously competes with today's closed models. If we got something like GPT-5.4-xhigh that could run on local hardware under $5k, that would be a major milestone.
jonaustin: And shout-out to Qwen if they release 122b -- Jeff Barr's original Gemma 4 tweet said they'd release a ~122b, then it got redacted :(
tech_curator: Does this run on CPUs as well? Has anyone faced any issues? Or do you prefer to run via APIs from model providers and aggregators such as OpenRouter, Qubrid, etc.?
dzonga: if any Alibaba (Qwen) folks are here - website is not working on safari
seemaze: I squeeze Qwen3.5-122B-A10B at Q6 into 128GB. It's a great model.
throwdbaaway: Based on the release schedule of 3.5 previously, my optimistic take is that they distill the small models from the 397B, and it is much faster to distill a sparse A3B model. Hopefully the other variants will be released in the coming days.
solomatov: Just curious, the fixes are not about weights but about templates, am I right?
poglet: Can this run on a PC with 16GB graphics card or a 24GB Macbook Pro? I'm not familiar with how Mixture-of-Experts models differ from standard models.
ThatPlayer: I'm using the smaller vision models (Qwen3.5-4B currently) with Frigate, a FOSS self-hosted "AI" NVR. It's good enough at analyzing images to figure out mostly what's happening, and doesn't require the big knowledge base that bigger models have.Also use a bigger model for summarizing or translating text, which I don't consume in realtime, so doesn't need to be fast. Would be a thing I could use OpenAI's batch APIs for if I did need something higher quality.
kanemcgrath: I have been using Qwen3.5-35B-A3B a lot in local testing, and it is by far the most capable model that can fit on my machine. I think quantization technology has really upped its game around these models, and there were two quants that blew me away: Mudler APEX-I-Quality, and later Byteshape Q3_K_S-3.40bpw. Both made claims that seemed too good to be true, but I couldn't find any traces of lobotomization during long agent coding loops. With the Byteshape quant I am up to 40+ t/s, which is a speed that makes agents much more pleasant. On an RTX 3060 12GB and 32GB of system RAM, I went from slamming all my available memory to having like 14GB to spare.
jadbox: Which one is best?
kanemcgrath: Now that I have tried out on a few tasks, Qwen3.6 is a huge jump in capability. It can make improvements to a project that qwen3.5 always struggled with.
rcxdude: Unified Memory is mainly how consumer hardware has enough RAM accessible by the GPU to run larger models, because otherwise the market segmentation jacks up the price substantially.
Nav_Panel: The point is that open weights puts inference on the open market, so if your model is actually good and providers want to serve it, that drives costs down and inference speeds up. Like Cerebras running Qwen 3 235B Instruct at 1.4k tps for cheaper than Claude Haiku (let that tps number sink in for a second; for reference, Claude Opus runs at ~30-40 tps and Claude Haiku at ~60 - well over an order of magnitude difference). As a company developing models, it means you can't easily capture the inference margins, even though I believe you get a small kickback from the providers. So I understand why they wouldn't want to go open weight, but on the other hand, open weight wins you popularity/sentiment if the model is any good, researchers (both academic and at other labs) working on your stuff, etc. Local-first usage is only part of the story here. My guess is Qwen 3.5 was successful enough that now they want to start reaping the profits. Unfortunately, most of Qwen 3.5's success is because it's heavily (and successfully!) optimized for extremely long-context usage on heavily constrained VRAM (i.e. local) systems, as a result of its DeltaNet attention layers.
jchw: Unfortunately it really is running this slow with Llama.cpp, but of course that's with Vulkan mode. The VRAM capacity is definitely where it shines, rather than compute power. I am pretty sure that this isn't really optimal use of the cards, especially since I believe we should be able to get decent, if still sublinear, scaling with multiple cards. I am not really a machine learning expert, I'm curious to see if I can manage to trace down some performance issues. (I've already seen a couple issues get squashed since I first started testing this.)I've heard that vLLM performs much better, scaling particularly better in the multi GPU case. The 4x B70 setup may actually be decent for the money given that, but probably worth waiting on it to see how the situation progresses rather than buying on a promise of potential.A cursory Google search does seem to indicate that in my particular case interconnect bandwidth shouldn't actually be a constraint, so I doubt tensor level parallelism is working as expected.
nyrikki: Parallelism can be tricky and always has a cost, but don't discount the 3090, which is more expensive these days in that price bracket.
3090, llama.cpp (container in VM):
- unsloth/Qwen3.6-35B-A3B-GGUF:UD-Q4_K_XL: 105 t/s
- unsloth/gemma-4-26B-A4B-it-GGUF:UD-Q4_K_XL: 103 t/s
Still slow compared to ggml-org/gpt-oss-20b-GGUF at 206 t/s. But on my 3x 1080 Ti + 1x TITAN V ghetto machine I learned that multi-GPU takes a lot of tuning no matter what. With the B70, where Vulkan has the CPU copy problem and SYCL doesn't have a sponsor or enough volunteers, it will probably take a bit of profiling on your part. There are a lot of variables, but PCIe bus speed doesn't matter that much for inference; the internal memory bandwidth does, and you won't match that with PCIe ever. To be clear, multicard Vulkan and especially SYCL have a lot of optimizations that could happen, but the only time two GPUs are really faster for inference is when one doesn't have enough RAM to fit the entire model. A 3090 has 936.2 GB/s of (low-latency) internal bandwidth, while 16x PCIe 5 only has 504.12, may have to be copied through the CPU, and involves locks, atomic operations, etc. For LLM inference the bottleneck is usually memory bandwidth, which is why my 3090 is so close to the 5070 Ti above. LLM next-token prediction is just a form of autoregressive decoding and will primarily be memory bound. As I haven't used the larger Intel GPUs I can't comment on what still needs to be optimized, but just don't expect multiple GPUs to increase performance without some NVLink-style RDMA support _unless_ your process is compute- and not memory-bound.
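The memory-bound argument above can be put in numbers: each generated token has to stream the active weights through memory once, which caps tokens/s at bandwidth divided by active-weight bytes. A back-of-envelope sketch (assumed figures, not benchmarks):

```python
# Rough throughput ceiling for memory-bound autoregressive decoding:
# tokens/s <= memory bandwidth / bytes of weights read per token.
def decode_tps_ceiling(bandwidth_gb_s, active_params_b, bytes_per_weight):
    return bandwidth_gb_s / (active_params_b * bytes_per_weight)

# A 3090 (~936 GB/s) running a 3B-active MoE at ~0.5 bytes/weight (Q4):
print(round(decode_tps_ceiling(936, 3, 0.5)))  # 624
```

Real-world numbers land well below this ceiling (the 105 t/s above), but the same arithmetic shows why shuttling weights over PCIe instead of local VRAM would crater throughput.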
bigyabai: UMA removes the PCIe bottleneck and replaces it with a memory controller + bandwidth bottleneck. For most high-performance GPUs, that would be a direct downgrade.
kennethops: I love the idea of building a competitor to open-weight models, but damn is this an expensive game to play.
pstuart: It is, but think about how advances in computing technology have made that power available over time: a Raspberry Pi is orders of magnitude more powerful than the Cray-1. Granted, these next couple of years are going to suck because of the AI Component Drought, but progress marches on, and the power and price of running today's frontier models will be affordable to mere mortals in time. Obviously we've hit the wall with Moore's law and other factors, but this will not always be out of reach.
rdslw: Interesting. I just tried this very model (unsloth, Q8, so in theory more capable than Simon's Q4) and got these three "pelicans". Definitely NOT Opus quality. LM Studio, via Simon's llm, but not Apple MLX. Of course the same short prompt. Simon, any ideas? https://ibb.co/gFvwzf7M https://ibb.co/dYHRC3y https://ibb.co/FLc6kggm (tried here temperature 0.7 instead of pure defaults)
strobe: try the Unsloth recommended settings:
- Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
- Instruct (or non-thinking) mode for general tasks: temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- Instruct (or non-thinking) mode for reasoning tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
(Please note that support for sampling parameters varies according to inference frameworks.)
zozbot234: > For most high-performance GPUs, that would be a direct downgrade.You really can't say that, it depends on what you're running. If your model fits within a dGPU's VRAM then yes, obviously, but plenty of models are larger.
bwv848: I've been trying the Q4_K_M version, and sometimes it gets stuck in a loop. Gemma 4 doesn’t have this issue.
Readerium: Perhaps increasing repetition_penalty might be helpful.
Aurornis: I play with the small open weight models and I disagree. They are fun, but they are not in the same class as hosted models running on big hardware.If some organization forbade external models they should invest in the hardware to run bigger open models. The small models are a waste of time for serious work when there are more capable models available.
Zetaphor: Most organizations aren't going to need the wide breadth of capabilities of the frontier models. They're risk-averse and LLMs are non-deterministic, so use cases are typically more tightly scoped to tasks involving nuanced classification that small models can easily handle, even if it takes a little fine-tuning on your organization's data.
nunodonato: https://sleepingrobots.com/dreams/stop-using-ollama/
txtsd: Thank you, I had no idea ollama was so shady! I will start using llama.cpp directly.
sigbottle: Is quantization a mostly solved pipeline at this point? I thought architectures were varied and weird enough that you can't just click a button, say "go optimize these weights", and go. New models have new code they want to operate on, right? So you'd have to analyze the code and insert the quantization at the right places, automatically, then make sure that doesn't degrade perf. Maybe I just don't understand how quantization works, but I thought quantization was a very nasty problem involving a lot of plumbing.
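The core of post-training quantization is less exotic than it sounds: map blocks of float weights to low-bit integers with a per-block scale. A toy round-to-nearest sketch (real schemes like GGUF's k-quants use sub-blocks and smarter scale search; this only shows the idea, and the "plumbing" the question mentions is mostly about wiring this into each architecture's layers):

```python
# Toy round-to-nearest 8-bit quantization with one scale per block.
def quantize_block(weights):
    scale = max(abs(w) for w in weights) / 127  # map the max weight to +/-127
    return [round(w / scale) for w in weights], scale

def dequantize_block(qweights, scale):
    return [q * scale for q in qweights]

block = [0.12, -0.5, 0.33, 0.01]
q, s = quantize_block(block)
recovered = dequantize_block(q, s)
# Each weight is recovered to within half a quantization step (scale / 2).
print(all(abs(a - b) <= s / 2 for a, b in zip(block, recovered)))  # True
```

Lower bit widths shrink the integer range (e.g. +/-7 for 4-bit), which is why scale placement and which layers stay high-precision matter so much.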
Readerium: That is true; GGUF does not support every architecture. For the most recent example, as of April 16, 2026 (today), Turboquant still isn't supported in GGUF.
evilduck: I just wanted to express gratitude to you guys; you do great work. However, it is a little annoying to have to redownload big models, and keeping up with the AI news and community sentiment is a full-time job. I wish there was some mechanism somewhere (on your site or Huggingface or something) for displaying feedback or confidence in a model being "ready for general use" before kicking off 100+ GB model downloads.
danielhanchen: Hey thanks - yes agreed - for now we do:

1. Split metadata into shard 0 for huge models so 10B is for chat template fixes - however sometimes fixes cause a recalculation of the imatrix, which means all quants have to be re-made

2. Add HF discussion posts on each model talking about what changed, and on our Reddit and Twitter

3. Hugging Face XET now has de-duplication downloading of shards, so generally redownloading 100GB models again should be much faster - it chunks 100GB into small chunks and hashes them, and only downloads the shards which have changed
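The chunk-level dedup idea in point 3 can be illustrated in a few lines. This is a simplified sketch of the concept, not XET's actual implementation (real systems typically use content-defined rather than fixed-size chunking):

```python
import hashlib

CHUNK = 64 * 1024  # fixed chunk size, for illustration only

def chunk_hashes(data: bytes):
    """Hash each fixed-size chunk of a blob."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old: bytes, new: bytes):
    """Indices of chunks in `new` whose hash we don't already have locally."""
    have = set(chunk_hashes(old))
    return [i for i, h in enumerate(chunk_hashes(new)) if h not in have]

old = bytes(CHUNK * 4)               # pretend 4-chunk model file
new = bytearray(old)
new[CHUNK * 2] ^= 0xFF               # one byte changed, inside chunk 2
print(changed_chunks(old, bytes(new)))  # → [2]: only chunk 2 needs re-downloading
```

A metadata fix touching only shard 0 would then cost one shard's worth of download rather than the full 100GB.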
ssrshh: If you happen to know - is this also why LM Studio and Ollama model downloads often fail with a signature mismatch error?
codeugo: Are we going to get to the point where a local model can do almost what sonnet 4.6 can do?
bluerooibos: Of course we are. And Opus 4.6+. It's a matter of when, not if.
danny_codes: Once you run out of data, it's just optimization on the road to commoditization
danny_codes: Give it 6 months
rvnx: China won again in terms of openness
danny_codes: Ironic
danny_codes: Exactly. Relying on external compute for professional work is a non-starter IMO.
ac29: > What Strix Halo system has unified memory?

All of them. The static VRAM allocation is tiny (512MB), most of the memory is unified
wren6991: On M5 Pro/Max the memory is actually just attached straight to the GPU die. CPU accesses memory through the die-to-die bridge. I don't see the difference between that and a pure GPU from a memory connectivity point of view.

Wrt inference servers: sure, it's not cost-effective to have such a huge CPU die and a bunch of media accelerators on the GPU die if you just care about raw compute for inference and training. Apple SoCs are not tuned for that market, nor do they sell into it. I'm not building a datacentre, I'm trying to run inference on my home hardware that I also want to use for other things.
sliken: All. Keep in mind strix != strix halo. You can get tablets, laptops, and desktops. I think Windows is more limited and might require static allocation of video memory - not because it's a separate pool, just because Windows isn't as flexible. With Linux you can just select the lowest number in the BIOS (usually 256 or 512MB) and let Linux balance the needs of the CPU/GPU. So you could easily run a model that requires 96GB or more.
naasking: Qwen models commonly get accused of benchmaxxing though. Just something to keep in mind when weighing the standard benchmarks.
naasking: Quantization can introduce these issues, and Gemma 4 also had issues because the prompt tokens that Gemma used was new and not well supported yet.
ekianjo: yeah the 27B feels like something completely different. If you use it on long context tasks it performs WAY better than 35b-a3b
Der_Einzige: I've been telling analysts/investors for a long time that dense architectures aren't "worse" than sparse MoEs, and to continue to anticipate the see-saw of releases between those two sub-architectures. Glad to be continuously vindicated on this one. For those who don't believe me: go take a look at the logprobs of a MoE model and a dense model and let me know if you notice anything. Researchers sure did.
naasking: MoE isn't inherently better, but I do think it's still an underexplored space. When your sparse model can do 5 runs on the same prompt in the time a dense model takes to generate one, all sorts of interesting possibilities open up.
mistercheese: Wow what kind of hardware do you have? Mac Studio, dgx spark, strix halo? How fast is it?
mistercheese: I use local models for asking about personal financial or health data that I want to keep local and private. Or even just whipping up quick and dirty prototypes for whatever I can think of but not seriously enough to spend tokens that I rather use on real projects.
npodbielski: I am not sure. I tested it locally on my Framework Desktop and so far it seems to give me worse answers than Qwen 3.5. Maybe it is because I am chatting with models in my language instead of English, or maybe it is optimised for coding instead. I asked it for instructions on how to create an SSH key and it tried to do it instead of just answering.

https://internetexception.com/2026/04/16/testing-qwen-3-6/
mistercheese: That’s a good point. I think I saw Together.ai with that offering, but for some reason just never think to throw random non urgent coding tasks at it overnight
jmb99: I’ve mentioned this as an option in other discussions, but if you don’t care that much about tok/sec, 4x Xeon E7-8890 v4s with 1TB of DDR3 in a supermicro X10QBi will run a 397b model for <$2k (probably closer to $1500). Power use is pretty high per token but the entry price cannot be beat.Full (non-quantized, non-distilled) DeepSeek runs at 1-2 tok/sec. A model half the size would probably be a little faster. This is also only with the basic NUMA functionality that was in llama.cpp a few months ago, I know they’ve added more interesting distribution mechanisms recently that I haven’t had a chance to test yet.
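Those speeds line up with a back-of-the-envelope estimate: CPU token generation is mostly memory-bandwidth bound, so decode speed is roughly effective bandwidth divided by bytes read per token. A rough sketch - the bandwidth and quantization figures below are my assumptions for illustration, not measurements of that exact box:

```python
# Rough decode-speed estimate for a dense model on a memory-bound CPU box.
# All numbers are assumed, for illustration only.
params = 397e9            # total parameters of a dense model
bytes_per_param = 0.55    # ~4.4 bits/weight, roughly a Q4-class quant
bandwidth = 250e9         # assumed aggregate effective bandwidth, B/s

# In a dense model every weight is read once per generated token.
bytes_per_token = params * bytes_per_param
tok_per_sec = bandwidth / bytes_per_token
print(f"~{tok_per_sec:.1f} tok/sec")  # lands in the low single digits
```

MoE models read only the active experts per token, which is why a model with few active parameters can decode much faster than its total size suggests.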
blurbleblurble: It only has 17b active params, it's a mixture of experts model. So probably a lot more people than you realize!
GrayShade: I get 20 t/s on the UD-Q6_K_XL quant, Radeon 6800 XT.
lpnam0201: Where I am living, 10k USD is a little more than 3 years' worth of rent for a relatively new and convenient 2-bedroom apartment.
znnajdla: I translate texts between Ukrainian, Russian and English dozens of times daily. The LLM translation is not only better, it's also refinable: you can chat with the AI to make changes to reflect what you meant.
realityfactchex: Here's a reproduction attempt (LM Studio, same Qwen3.6-35B-A3B-GGUF model as linked in parent, M1 Max 64GB, <90 seconds):

https://files.catbox.moe/r3oru2.png

- My Qwen 3.6 result had sun and cloud in sky, similar to the second Opus 4.7 result in Simon's post.
- My Qwen 3.6 result had no grass (except as a green line), but all three results in Simon's post had grass (thick).
- My Qwen 3.6 result had visible "tailing air motion" like Simon's Qwen 3.6 result.
- My Qwen 3.6 result had a "sun with halo" effect that none of Simon's results had.

But, I know, it's more about the pelican and the bicycle.
_ache_: The bicycle frame is ok. Simon's was better, but at least it's not broken like Opus 4.7's. I can't comment on that flamingo.
johanvts: I think it’s difficult to draw a bike exactly because you remember how it works rather than how it looks, so you worry about placing all the functional parts and get the overall composition wrong. Similar to drawing faces, without training, people will consistently dedicate too much area to the lower part of the face and draw some kind of neanderthal with no forehead.
psim1: (Please don't downvote - serious question) Are Chinese models generally accepted for use within US companies? The company I work for won't allow Qwen.
kelsey98765431: In the private sector, yes. Anything that touches the public sector (government) starts to raise supply chain concerns, and they want all American-made models
gbgarbeb: The only problem is that the American models are super fracking dumb. Arcee Thinking Large (398B) is orders of magnitude worse than even Qwen 3.5 35B, getting stuck in thinking loops with incredibly basic questions that Google could answer in 500ms.
bertili: It's fascinating that a $999 Mac Mini (M4 32GB) with roughly the same wattage as a human brain gets us this far.
johanvts: Interesting thought. I looked it up out of curiosity and found 155W max (but realistically more like 80W sustained) for the Mac under load, and around 20 watts for the brain - surprisingly, almost constant whether "under load" or not.
canpan: 122b would be awesome. It is the largest size you can kinda run with a beefy consumer PC. I wondered why Gemma stopped in the 30b category; it is already very strong there. 122b might have been too close to being really useful.
gbgarbeb: $277 a month for a two bedroom is literally 6-10% of what someone in the SF Bagholder Area pays. Either you're in Africa, southeast Asia or south/central America. How do you even afford internet?
lpnam0201: Yes, I am in SEA. Home internet here costs $10 per month. My point was: not every person browsing this site has a high living standard, and the ability to spend 10k on computing is a privilege.
canpan: Not OP, but I ran 122b successfully with normal RAM offloading. You don't need all that much VRAM, which is super expensive. I used 96GB RAM + a 16GB VRAM GPU. But it's not very fast in that setup - maybe 15 tokens per second. Still, you can give it a task and come back later and it's done. (Disclaimer: I built that PC before stuff got expensive)
edg5000: What can and what can't it do compared to Codex and CC?
rwmj: But it's well within the budget of a small company that wants to run a model locally. There are plenty of reasons to run one locally even if it's not state of the art, such as for privacy, being able to do unlimited local experiments, or refining it to solve niche problems.
sigbottle: That... is a more plausible explanation I didn't think of.
danielhanchen: Yes we collab with them!
qskousen: Sorry, this is a bit of a tangent, but I noticed you also released UD quants of ERNIE-Image the same day it released, which as I understand requires generating a bunch of images. I've been working to do something similar with my CLI program ggufy, and was curious if you had any info you could share on the kind of compute you put into that, and whether you generate full images or look at latents?
danielhanchen: Yes we have started doing diffusion GGUFs but it's in its infancy :) But yes we do generate images to test quants out!
root_axis: How does that work? Wouldn't it be slow loading the weights into memory every time you launch it?
theshrike79: I'm guessing they're not using it as a word dictionary, but rather translating longer texts where the time to load the model isn't a significant issue.
dist-epoch: The default Qwen "quantization" is not "bad", it's "large".Unsloth releases lower-quality versions of the model (Qwen in this case). Think about taking a 95% quality JPEG and converting it to a 40% quality JPEG.Models are quantized to lower quality/size so they can run on cheaper/consumer GPUs.
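To stretch the JPEG analogy: the fewer bits per weight, the bigger the reconstruction error. A tiny sketch comparing 8-bit and 4-bit round-trip error on the same values (illustrative only - real GGUF quants are block-wise and more sophisticated than this per-tensor scheme):

```python
def roundtrip_error(weights, bits):
    """Symmetric quantization at a given bit width; return mean abs error."""
    levels = 2 ** (bits - 1) - 1               # 127 for 8-bit, 7 for 4-bit
    scale = max(abs(w) for w in weights) / levels
    approx = [round(w / scale) * scale for w in weights]
    return sum(abs(a - w) for a, w in zip(approx, weights)) / len(weights)

weights = [0.013, -0.92, 0.44, -0.07, 0.61, -0.3]
err8 = roundtrip_error(weights, 8)   # the "95% quality JPEG"
err4 = roundtrip_error(weights, 4)   # the "40% quality JPEG"
assert err4 > err8                   # fewer bits, coarser grid, more error
```

This is why the heavily compressed quants run on consumer GPUs at all: you trade reconstruction fidelity for memory, and the art is in losing the bits that matter least.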
danielhanchen: Love the JPEG analogy :)
danielhanchen: Yes so chat templates and the actual implementations
magicalhippo: Appreciate the work of your team very much.Though chat templates seem like they need a better solution. So many issues, seems quite fragile.
danielhanchen: Thank you! Agreed on chat template issue
bmitc: > that these really are ultimately just stochastic sentence machines

I thought that's exactly what they are?
mastermage: I am so perplexed about what exactly people thought they were. It's nothing other than highly sophisticated statistics.
tmountain: From that perspective, which is totally correct, it makes you wonder what other domains of knowledge look like when pushed to the boundaries of our capabilities as a species.
wolfhumble: This is like saying that open source is not important because I don't have a machine to run it on right now. Of course it is important. We don't have any state-of-the-art language models that are open source, but some are still open weight. Better than nothing, and the only way to secure some type of privacy and control over your own AI use. It is my goal to run these large models locally eventually; if they all go away, that is not even a possibility...
redman25: A Strix Halo machine or Mac will run at less than 20 watts idle. You could leave it running.
petu: > 155w max (but realistically more like 80w sustained)

The 155W PSU seems to be shared with the M4 Pro model, plus there's reserve for peripherals (~55W for 5 USB/Thunderbolt ports). Apple lists 65W for the base M4 Mac itself: https://support.apple.com/en-am/103253

Notebookcheck found the same number: https://www.notebookcheck.net/Apple-Mac-Mini-M4-review-Small...
survirtual: I use this metric now, and I suggest you change it per your imagination:

"Make a single-page HTML file using threejs from a CDN. Render a scene of a flying dinosaur orbiting a planet. There are clouds with thunder and lightning, and the background is a beautiful starscape with twinkling stars and a colorful nebula"

This allows me to evaluate several factors across models. It is novel and creative. I generally run it multiple times, though now that I have shared it here, I will come up with new scenes personally to evaluate.

I also consider how well it one-shots, errors generated, response to errors being corrected, and velocity of iteration to improvement.

Generally speaking, Claude Sonnet has done the best, Qwen3.5 122B does second, and I have nice results from Qwen3.5 35B. ChatGPT does not do well. It can complete the task without errors but the creativity is atrocious.
reissbaker: Dense is (much) worse in terms of training budget. At inference time, dense is somewhat more intelligent per bit of VRAM, but much slower, so for a given compute budget it's still usually worse in terms of intelligence-per-dollar even ignoring training cost. If you're willing to spend more you're typically better off training and running a larger sparse model rather than training and running a dense one.Dense is nice for local model users because they only need to serve a single user and VRAM is expensive. For the people training and serving the models, though, dense is really tough to justify. You'll see small dense models released to capitalize on marketing hype from local model fans but that's about it. No one will ever train another big dense model: Llama 3.1 405B was the last of its kind.
egorfine: I was comparing various models on an M5 Pro 48GB RAM, MLX vs GGUF, and found that MLX models have a higher time to first token (sometimes by an order of magnitude) while tokens/sec and memory usage are the same as GGUF.

Gemma 3 27B q4:
* MLX: 16.7 t/s, 1220ms ttft
* GGUF: 16.4 t/s, 760ms ttft

Gemma 4 31B q8:
* MLX: 8.3 t/s, 25000ms ttft
* GGUF: 8.4 t/s, 1140ms ttft

Gemma 4 A4B q8:
* MLX: 52 t/s, 1790ms ttft
* GGUF: 51 t/s, 380ms ttft

All comparisons done in LM Studio; all versions of everything are the latest.
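For anyone wanting to reproduce that kind of comparison: ttft and decode t/s can be measured from any streaming client with a couple of timestamps. A generic sketch - `fake_stream` below is a hypothetical stand-in for whatever streaming API you actually use:

```python
import time

def measure(stream):
    """Time-to-first-token and decode tokens/sec for any token iterator."""
    start = time.perf_counter()
    first = last = None
    count = 0
    for _tok in stream:
        last = time.perf_counter()
        if first is None:
            first = last
        count += 1
    ttft = first - start
    # Decode speed excludes the first token, whose latency includes
    # prompt processing (the part where MLX vs GGUF differed above).
    tps = (count - 1) / (last - first) if count > 1 and last > first else 0.0
    return ttft, tps

def fake_stream(n=20, prefill=0.05, per_tok=0.01):
    # Hypothetical client: slow first token (prefill), steady decode after.
    time.sleep(prefill)
    for i in range(n):
        if i:
            time.sleep(per_tok)
        yield "tok"

ttft, tps = measure(fake_stream())
print(f"ttft={ttft*1000:.0f}ms, {tps:.0f} tok/s")
```

Separating the two numbers matters precisely because of results like the above: identical t/s can hide a 20x difference in prompt processing.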
logicallee: what kind of specs does your laptop have? do you know how many tokens/second you get on it?
logicallee: What kind of hardware does this require to run locally, and how many tokens/seconds does it produce?
mettamage: who do you compare it against qwen3.5 27b?
jwitthuhn: 128GB on a mac with unified memory. The model itself takes something like 110 of that and then I have ~16 left over to hold a reasonably sized context and 2 for the OS.I do have a dedicated machine for it though because I can't run an IDE at the same time as that model.
Hugsun: Unfortunately, llama.cpp quantization technology has been stagnant for two years. The main quantization developer left or was kicked out of llama.cpp due to an attribution dispute. He created his own fork ik_llama.cpp where he has made multiple new and better quants.unsloth and byteshape are just using and highlighting features that have been available the whole time. I am very invested in figuring out a solution to this dispute, or some way to get the new quants upstreamed.
hansmayer: Valid points, but you'd think "superintelligence" would "know" how to draw a pelican on a bike?
coder543: Every model release gets accused of that, including the flagship models.
Divs2890: Does any LLM aggregator offer this model?
zozbot234: > This is very different from Gemma 4, where the 26B-A4B model is much worse on several benchmarks (e.g. Codeforces, HLE) than 31B.Wouldn't you totally expect that, since 26A4B is lower on both total and active params? The more sensible comparison would pit Qwen 27B against Gemma 31B and Gemma 26A4B against Qwen 35A3B.
Hugsun: They're comparing Qwen's moe vs dense (smaller difference) against Gemma's moe vs dense (bigger difference). Your proposed alternative misses the point.
zozbot234: Gemma's dense is bigger than its moe's total parameters. You could totally expect the moe to do terribly by comparison.
mobiuscog: I'm having the same issues, the more I use it. The repetition penalty doesn't seem to help.I get some really amusing 'reflective' responses, but I think it needs a bit more cooking. Maybe I'll try another variant.