Discussion
Qwen3.5 Fine-tuning Guide
antirez: Fine tuning is a story that is nice to tell, but with modern LLMs it makes less and less sense. Modern LLMs are so powerful that they are able to few-shot learn complicated things, so a strong prompt and augmenting the generation (given the massive context window of Qwen3.5, too) is usually the best option available. There are models for which fine tuning is great, like image models: there, with LoRA, you can get good results in many ways. And LLMs of the past, too: it made sense for certain use cases. But now, why? LLMs are already released after seeing (after pre-training) massive amounts of data for SFT and then RL. Removing the censorship is much more efficiently done with other techniques. So I have a strong feeling that fine tuning becomes less relevant every day, and already is quite irrelevant. This, again, in the specific case of LLMs. For other foundational models fine tuning still makes sense and is useful (images, text to speech, ...).
clueless: What are some sample real world cases folks are using to fine tune their own small/medium models?
danielhanchen: Oh I wrote up a post on X on this exact question! https://x.com/danielhanchen/status/1979389893165060345?s=20

1. Cursor used online RL to get +28% approval rate: https://cursor.com/blog/tab-rl
2. Vercel used RFT for their AutoFix model for V0: https://vercel.com/blog/v0-composite-model-family
3. Perplexity's Sonar for Deep Research Reasoning I think was a finetuned model: https://docs.perplexity.ai/docs/getting-started/overview
4. Doordash uses LoRA, QLoRA for a "Generalized Attribute Extraction model": https://careersatdoordash.com/blog/unleashing-the-power-of-l...
5. NASA flood water detection: https://earthdata.nasa.gov/news/nasa-ibm-openly-release-geospatial-ai-foundation-model-nasa-earth-observation-data
6. Online RL for robotics - imagine you teaching a robot in the future via some mini finetuning
7. OpenAI's RFT page has more: https://developers.openai.com/api/docs/guides/rft-use-cases
8. For larger models - https://www.mercor.com/blog/expert-data-drives-model-perform...
ranger_danger: where it makes sense IMO is when you need it to know about a large amount of information that's not already in the model, such as a company knowledgebase, code repositories or a trove of specialized legal documents... in that case it's not realistic to try to stuff the context window every time with that information, especially if you're trying to make a responsive chat bot.
dotancohen: Wouldn't a RAG make more sense for this use case?
antirez: With the current context windows, and given that these models went through RL to work as agents, it's much faster and more reliable for them to use tools and find the information before replying. Much better: no hallucination problems (or a lot fewer), no fine tuning needed when information changes. I believe it is exactly in this case that fine tuning is no longer useful, and even in the past it worked at very different degrees of quality.
danielhanchen: These are fair points considering LLMs are getting smarter and better every week - but to be fair the biggest benefits of finetuning / RL are still not yet realized:

1. If we have robots at home, they need some sort of efficient continual learning, which could be on-the-go finetuning / RL via some small LoRA - this will need multimodal finetuning with sparse reward signals - one could also imagine all data being aggregated to one central processing center after anonymization, and training a larger model with more data + RL like that
2. Agreed that images, audio, video etc is still what LoRA does well - the guide at https://unsloth.ai/docs/models/qwen3.5/fine-tune is actually a vision + text finetuning guide, so you can finetune the vision layers on your own use case
3. Distillation and model routing is going to be more efficient for all - ie locally smallish models with LoRA for continuous finetuning can be used, but complex tasks can be offloaded to a large LLM in the cloud.
prettyblocks: I think the biggest case for fine tuning is probably that you can take small models, fine tune them for applications that require structured output, and then run cheap inference at scale. "Frontier LLMs can do it with enough context" is not really a strong argument against fine-tuning, because they're expensive to run.
butILoveLife: This is literally what I'm waiting for. I want a ~8B model that works well with OpenClaw.
canyon289: I work on Gemma and Gemini models, and I want to echo Daniel's point here. Small finetuned models have their place even with larger general purpose models.

For example, last year with Daniel/Unsloth's help we released a tiny specialized model that can get Gemini-level performance specifically for function calling. For folks that need efficient limited-purpose models, small models like this can fit a specific need. https://blog.google/innovation-and-ai/technology/developers-...

Especially on device. https://developers.googleblog.com/on-device-function-calling...

It's the same with chips: we have general purpose CPUs, but we still have specialized silicon for smaller tasks that is more power efficient and cheaper, and because it's single purpose it simplifies and derisks certain designs.

And I have to add, if you want to learn about finetuning models efficiently, the Unsloth guides are at the top of my list. They're practical, have all the technical details, and most importantly Daniel and the others are working around the clock to keep them up to date in what is an incredibly fast-moving space of models and hardware. I am continually astounded by their work.
KronisLV: > But now, why?

Because these models are good in general but their Latvian output is half-drivel, like the roots of the words are usually the right ones, but not the rest.

That, and EuroLLM is really slow to release new models that would be similarly good off the shelf.
Me1000: Wouldn’t it be better to use a grammar in the token sampler? Tuning is fine, but doesn’t guarantee a syntactical correct structured output. But if the sampler is grammar aware it could.
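A minimal, self-contained sketch of the grammar-aware sampling Me1000 describes. The "model", grammar, and token set here are all toy stand-ins (real implementations like llama.cpp's GBNF grammars or Outlines mask the full vocabulary's logits); the point is that at each step the sampler drops every token the grammar disallows, so the output is syntactically valid by construction even when the unconstrained model would prefer an illegal token.

```python
# Toy grammar for outputs of the form {"label": "<A|B|C>"}:
# the legal continuations at each decoding step.
ALLOWED = {
    0: ['{"label": "'],
    1: ["A", "B", "C"],
    2: ['"}'],
}

def fake_model_scores(candidates):
    # Stand-in for model logits; deliberately prefers an
    # illegal token "D" to show the mask doing its job.
    scores = {c: float(len(c)) for c in candidates}
    scores["D"] = 99.0
    return scores

def constrained_decode():
    out = []
    for step in range(3):
        legal = ALLOWED[step]
        scores = fake_model_scores(legal)
        # Grammar mask: drop every candidate the grammar disallows here.
        scores = {tok: s for tok, s in scores.items() if tok in legal}
        out.append(max(scores, key=scores.get))
    return "".join(out)

print(constrained_decode())  # always valid JSON with label A, B, or C
```

Without the mask, "D" would win at step 1 and the output would be malformed; with it, a fine-tuned but grammar-unaware model can never emit invalid structure.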
prettyblocks: I don't think you will get that anytime soon because for a model to work well with something like openclaw it needs a massive context window.
butILoveLife: but but but but unified memory! (jk, I don't actually believe in Apple marketing words)

There might be future optimizations. Like having your small model do CoT to find where to look for relevant memory.
joefourier: Fine-tuning still makes sense for cost/latency-sensitive applications. Massive context windows drastically slow down generation, and modern models' performance and instruction-following ability relies heavily on a reasoning step that can consume orders of magnitude more tokens than the actual response (depending on the application), while a fine-tuned model can skip or significantly reduce that step.

Using the large model to generate synthetic data offline with the techniques you mentioned, then fine-tuning the small model on it, is an underrated technique.
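A sketch of the offline distillation loop that comment describes, with everything hypothetical: `teacher_label` stands in for an expensive frontier-model call with a long prompt, and the resulting (input, target) pairs become the small model's SFT set.

```python
def teacher_label(text):
    # Stand-in for the large "teacher" model run offline;
    # a real pipeline would call a frontier LLM API here.
    return "INVOICE" if "invoice" in text.lower() else "OTHER"

raw_inputs = [
    "Invoice #123 due May 1",
    "Meeting notes from Tuesday",
]

# Each pair becomes one SFT example for the small student model;
# the expensive reasoning happens once, offline, not per request.
train_set = [
    {"prompt": x, "completion": teacher_label(x)}
    for x in raw_inputs
]
print(train_set)
```

The student then answers directly, without the teacher's long context or reasoning tokens, which is where the cost/latency win comes from.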
azath92: Only to prompt thought on this exact question, I'm interested in answers:

I just ran a benchmark against haiku on a very simple document classification task that at the moment we farm out to haiku in parallel. Very naive, same prompt system via the same AWS Bedrock API, and I can see that a few of the 4b models are a pretty good match, and could easily be run locally or cheaply via a hosted provider. "How much data and how much improvement" is a question I don't have good intuition for anymore; I don't even have an order-of-magnitude guess on those two axes.

Here are raw numbers to spark discussion:

| Model         | DocType% | Year% | Subject% | In $/MTok |
|---------------|----------|-------|----------|-----------|
| llama-70b     | 83       | 98    | 96       | $0.72     |
| gpt-oss-20b   | 83       | 97    | 92       | $0.07     |
| ministral-14b | 84       | 100   | 90       | $0.20     |
| gemma-4b      | 75       | 93    | 91       | $0.04     |
| glm-flash-30b | 83       | 93    | 90       | $0.07     |
| llama-1b      | 47       | 90    | 58       | $0.10     |

Percents are doc type (categorical), year, and subject name match against haiku. Just uses the first 4 pages.

In the old world where these were my own in-house models, I'd be interested in seeing if I could uplift those numbers with training, but I haven't done that with the new LLMs in a while. Keen to get even a finger in the air if possible.

Can easily generate tens of thousands of examples. Might try myself, but always keen for an opinion.

_edit for table formatting_
piyh: Qwen 9B doesn't?
butILoveLife: Nothing is really usable outside Opus.I've tried too. Wasted a few days trying out even high end paid models.
bravura: For me, it's about trying to fine-tune a model to write "best day" prose I would accept over 80% of the time.

You are correct if we are talking about knowledge. However, it is bad at hyper-idiosyncratic, gritty style transfer.

I first noticed the issue when asking claude code to draft email responses. The choice of register was off. ("Register in writing refers to the level of formality and tone chosen to suit a specific audience, purpose, and context.")

I decided to take all my HN comments, rewrite them in various bad LLM prose, and see if I could use DSPy to optimize a prompt using in-context learning (ICL: I give it 10 examples of my HN comments), and the results were abysmal. RLHF fine-tuned frontier LLMs have a deep-seated aversion to the target stylistic distribution of my comments.

I tried fine-tuning qwen3, llama, and gemma models. Instruct models are already so tuned that they could not be tuned. This is using several hundred comments as gold targets and 5 different LLM degradations per gold as the input.
krasikra: Fine-tuned Qwen models run surprisingly well on NVIDIA Jetson hardware. We've deployed several 7B variants for edge AI tasks where latency matters more than raw accuracy – think industrial inspection, retail analytics where you can't rely on cloud connectivity. The key is LoRA fine-tuning keeps the model small enough to fit in unified memory while still hitting production-grade inference speeds. Biggest surprise was power efficiency; a Jetson Orin can run continuous inference at under 15W while a cloud round-trip burns way more energy at scale.
andai: Very interesting. Could you give examples of industrial tasks where lower accuracy is acceptable?
andai:

| Model         | DocType% | Year% | Subject% | In $/MTok |
|---------------|----------|-------|----------|-----------|
| llama-70b     | 83       | 98    | 96       | $0.72     |
| gpt-oss-20b   | 83       | 97    | 92       | $0.07     |
| ministral-14b | 84       | 100   | 90       | $0.20     |
| gemma-4b      | 75       | 93    | 91       | $0.04     |
| glm-flash-30b | 83       | 93    | 90       | $0.07     |
| llama-1b      | 47       | 90    | 58       | $0.10     |
embedding-shape: > where latency matters more than raw accuracy – think industrial inspection

Huh? Why would industrial inspection, in particular, benefit from lower latency in exchange for accuracy? Sounds a bit backwards, but maybe I'm missing something obvious.
someotherperson: At a very high level, think fruit sorting[0] where the conveyor belt doesn't stop rolling and you need to rapidly respond, and all the way through to monitoring for things like defects in silicon wafers and root causing it. Some of these issues aren't problematic on their own, but you can aggregate data over time to see if a particular machine, material or process within a factory is degrading over time. This might not be throughout the entire factory but isolated to a particular batch of material or a particular subsection within it. This is not a hypothetical example: this is an active use case.[0] https://www.youtube.com/watch?v=vxff_CnvPek
embedding-shape: But why would I want the results to be faster but less reliable, vs slower and more reliable? Feels like the sort of thing where you'd favor accuracy over speed, otherwise you're just degrading the quality control?
bigyabai: The high-nines of fruit organization are usually not worth running a 400 billion parameter model to catch the last 3 fruit.
0cf8612b2e1e: Local, offline system you control is worth a lot. Introducing an external dependency guarantees you will have downtime outside of your control.
throwaway6977: I agree- I'm currently trying to learn how I can embed a fine tuned tiny model into my c++ game so it can provide a narrative in prose of certain game-event logs. It needs to be as tiny as possible so it doesn't take resources away from the running game.
yw3410: How small a model are we talking? Don't even the smallest models which would work need gigabytes of memory?
sorenjan: But that's not something you'd use an LLM for. There have been computer vision systems sorting bad peas for more than a decade[0], of course there are plenty of use cases for very fast inspection systems. But when would you use an LLM for anything like that?[0] https://www.youtube.com/watch?v=eLDxXPziztw
_the_inflator: I agree.

Also, for certain use cases there are constraints like embedded hardware systems with no internet access. These LLMs have to be trained to specialize for clearly defined use cases under hardware constraints.

Frontier LLMs also rarely function in isolation; instead they orchestrate a system of special units, aka subsystems and agents.

While costs and effort are one thing, being able to downsize these monster LLMs through finetuning in the first place is itself extremely valuable.
embedding-shape: Right, but that doesn't answer why you'd need a fast 7b LLM rather than a slightly less fast 14b LLM.
0xbadcafebee: You would use a VLM (vision language model). The model analyzes the image and outputs text, along with general context, that can drive intelligent decisions. https://tryolabs.com/blog/llms-leveraging-computer-vision
CamouflagedKiwi: It's not that you want it to be faster, but you want the latency to be predictable and reliable, which is much more the case for local inference than sending it away over a network (and especially to the current set of frontier model providers who don't exactly have standout reliability numbers).
jwatte: Hard real time is a thing in some systems. Also, the current approaches might have 85% accuracy -- if the LLM can deliver 90% accuracy while being "less exact" that's still a win!
STARGA: Unsloth's approach of patching the attention kernels at the Python level rather than requiring custom CUDA is what makes this accessible. Most fine-tuning guides assume you have a multi-GPU cluster and 48+ GB VRAM. The 4-bit QLoRA path on a single 24GB card is where most practitioners actually operate.

One thing to watch with Qwen3.5 specifically: the model uses grouped query attention with a different head ratio than Llama-family models, so LoRA rank selection matters. The standard r=16 recommendation from Llama fine-tuning may not transfer directly. Worth running a sweep on r={8, 16, 32} with a small validation set before committing to a full training run.

The edge deployment angle in the thread is interesting too. Quantized fine-tuned models on Jetson hardware are a legitimate production path now. The bottleneck has shifted from model quality to data curation: garbage in, garbage out applies even more at 7B scale because the model does not have enough capacity to route around noisy training data the way a 70B model can.
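The rank sweep mentioned above changes adapter size linearly in r: a LoRA adapter on a d_in x d_out weight matrix adds r*(d_in + d_out) trainable parameters. The dimensions and layer counts below are illustrative assumptions, not Qwen3.5's actual architecture; the arithmetic is the point.

```python
# Hypothetical Qwen-like dimensions, for illustration only.
d_in = d_out = 4096
n_layers, n_matrices = 36, 4  # assumed: q, k, v, o projections per layer

for r in (8, 16, 32):
    per_matrix = r * (d_in + d_out)           # LoRA A (r x d_in) + B (d_out x r)
    total = per_matrix * n_matrices * n_layers
    print(f"r={r:>2}: ~{total / 1e6:.1f}M trainable params")
```

Doubling r doubles the adapter, so a sweep over {8, 16, 32} spans a 4x range in trainable parameters, which is cheap to explore against a small validation set.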
arcanemachiner: Nobody said you would use an LLM for that. It's an example of a process where "industrial inspection, in particular, [would] benefit from lower latency in exchange for accuracy".The point of their comment isn't that you would use an LLM to sort fruit. It was just an illustrative example.
sorenjan: The discussion was about fine-tuned Qwen models, not industrial inspection in general. I would also find it interesting to learn about what kind of edge AI industrial inspection task you could do with fine-tuned llms, not some handwavy answer about how sometimes latency is important in real time systems. Of course it is, so generally you don't use models with several billion parameters unless you need to.
mountainriver: If that were true, we would be able to run working agents out of the box on any domain. We are far from that still; for reliability in most applications you need fine tuning.

For any new modality you need fine tuning.
For voice, image and video models you need fine tuning.
For continual learning you (often) need fine tuning.
For any domain that is somewhat OOD you need fine tuning.
To fully ground a model you need fine tuning.
clipclopflop: Hi! I think this is a pretty good example:https://www.atredis.com/blog/2024/6/3/how-to-train-your-larg...
kristianp: > Instruct models are already so tuned that they could not be tunedSome models have the base model available, that is before instruction tuning. For example llama 3 comes in "pre-trained and instruction tuned variants" [1]. I'm guessing you already know that though.[1] https://huggingface.co/meta-llama/Meta-Llama-3-8B
dehrmann: Naive question, but could neural networks handle these use cases?
hedgehog: There are a bunch of tutorials on how to use GRPO to fine tune a small Qwen. Depending what you're doing LoRA or even just prefix tuning can give pretty good results with no special hardware.
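Since GRPO comes up here: its core idea fits in a few lines. Sample several completions per prompt, score each one, and use the reward's z-score within that group as the advantage, so no separate value model is needed. A minimal sketch of that computation (the reward values are made up):

```python
import statistics

def group_advantages(rewards, eps=1e-8):
    # GRPO-style group-relative advantage: normalize each sampled
    # completion's reward against its own group's mean and std.
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# e.g. 4 completions for one prompt, reward 1.0 if verified correct
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_advantages(rewards))  # correct answers get positive advantage
```

Completions that beat their group's average get a positive advantage and are reinforced; the rest are pushed down, which is the whole training signal.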
andriy_koval: > "Frontier LLMs can do it with enough context" is not really a strong argument against fine-tuning, because they're expensive to run.

I am not an expert in this topic, but I am wondering whether a large cached context is actually cheap to run, and whether frontier models would be cost-efficient too in such a setting?
prettyblocks: I'd like to read more about that if anyone has any suggestions.
arcanemachiner: The thread you're in broke away from the main discussion topic.Again: Nobody is using LLMs to (for example) sort fruit. But there are some industrial processes that prioritize latency over reliability.
thot_experiment: NTA but almost certainly; the advantage is that Qwen3.5 is extremely generic already, so adapting it to a specific task is way easier than training a NN from scratch. It's probably akin to how OCR is now just something I use Qwen for even though I have access to dedicated OCR tools; Qwen is good enough and it's already in my vram. Modern VLMs are pretty great at answering basic questions about an image by default, and I'm guessing finetuning takes them from "pretty good" to "good enough to use in production".
aliljet: Does fine tuning really improve anything over pure RAG approaches for use cases that involve tons of direct document context?
NitpickLawyer: Remember how the tab-next-action model from Cursor was all the rage ~2 years ago when they launched it? That was a fine-tune of a ~70b model (they kinda alluded to this in a podcast).
eitally: Industrial inspection is usually a fairly blunt task and I wouldn't be concerned about accuracy. Especially in high volume environments where training data is plentiful. Think about things like chip placement errors, alignment problems, bad solder joints, missing components.
w10-1: > NVIDIA Jetson hardware ... 15W

7B on 15W could be any of the Orin line (TOPS): Nano (40), NX (100), AGX (275).

Curious if you've experimented with a larger model on the Thor (2070).
Zetaphor: Or smaller on Nano
nl: No, we are literally trying to find a use case where using a lower-accuracy LLM makes sense for a vision task.

But fine - what are these industrial processes that prioritize latency over reliability, where using an LLM - as mentioned by the OP - makes sense?
hrmtst93837: I think fine-tuning still matters for production problems where you need deterministic, auditable behavior or to reliably reduce hallucinations that clever prompting alone cannot eliminate. In my experience the best pragmatic approach is parameter efficient tuning, for example LoRA or QLoRA with bitsandbytes for 4-bit training to keep costs down, paired with a RAG layer over a FAISS vector DB so you do not stuff the model context and blow your token budget. I've found that managing a few tuned adapters and a small ops pipeline is a simpler, cheaper long term tradeoff than endless prompt gymnastics, and it saves you from praying to the prompt gods every time requirements creep.
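A toy stand-in for the RAG layer that comment pairs with its tuned adapters: embed the knowledge-base chunks, retrieve the top-k by cosine similarity, and prepend only those to the prompt instead of the whole corpus. The bag-of-words "embedding" below is a deliberate simplification; a real setup would use FAISS plus an actual embedding model.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; NOT a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

docs = [
    "LoRA adapters are merged at inference time",
    "FAISS builds an index over dense vectors",
    "The cafeteria opens at nine",
]

def retrieve(query, k=2):
    # Rank every chunk against the query; keep only the top-k,
    # so the model context stays small regardless of corpus size.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

print(retrieve("how does FAISS index vectors"))
```

The token-budget point follows directly: context cost scales with k chunks retrieved, not with the size of the knowledge base.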
BoredomIsFun: Llama-3-8B is a coprolite at this point.
simgt: Do you have concrete examples to share of what you do with these models?
simgt: Is Qwen3.5 7B good at any of these tasks? Here I am, deploying fine-tuned yolos and esvits on my Jetsons, but maybe all I need is a vLLM and a prompt.
Jowsey: This reply is entirely AI generated. You guys are trying to find reason in a hallucination. It's unfortunately impossible to put into words what the "LLM smell" is at this point, but I trust someone else who spends a lot of time reading LLM output can back me up on this.I've seen these agent-written fake anecdotes on Twitter, Reddit, and now here, all with the exact same formatting. They pretend to be real people with real anecdotes, but they're all completely made up.
azath92: thank you so much! i suffered with this, and now i never will again!
arkmm: You can fine tune a small LLM with a few thousand examples in just a few hours for a few dollars. It can be a bit tricky to host, but if you share a rough idea of the volume and whether this needs to be real-time or batched, I could list some of the tradeoffs you'd think about.Source: Consulted for a few companies to help them finetune a bunch of LLMs. Typical categorical / data extraction use cases would have ~10x fewer errors at 100x lower inference cost than using the OpenAI models at the time.
azath92: OK, even that "few thousand examples" heuristic is useful. The use case would be to run this task over, I'd say, somewhere in the order of magnitude of 100k extractions in a run, batched not real-time, and we'd be interested in (and already do) regular reruns with minor tweaks to the extracted blob (1-10 simple fields, nothing complex).

My interest in fine tuning at all is based on an adjacent interest in self-hosting small models, although I tested this on AWS Bedrock for ease of comparison. So my hope is that, given we are self-hosting, fine tuning and hosting our tuned model shouldn't be terribly difficult, at least compared to managed finetuning solutions on cloud providers, which I'm generally wary of. Happy for those assumptions to be challenged.
woctordho: This time even Unsloth could not provide bitsandbytes 4-bit models. bitsandbytes does not support new models with MoE and linear attention, and it's much less flexible than GGUF. Nowadays I think it's better to train a LoRA over a GGUF base model; see the discussion at https://github.com/huggingface/transformers/issues/40070

I'll find some time to do this, and I hope someone does it before me.
woctordho: In one word, porn.Qwen filtered out a lot of porn during data curation, and a finetuned model can perform much better than context engineering. Abliteration can only remove censorship, not add something non-existent in the training data.This guy did some great work in the age of Qwen 3.0: https://huggingface.co/chenrm/qwen3-235b-a22b-h-corpus-lora
PunchyHamster: The fact that the comment is made-up LLM nonsense. That's what you're missing.
BoredPositron: Holy moly ITT: people arguing with a chatbot about the viability of a hallucinated use case.
ygouzerh: I am thinking to fine-tune it to recognize better my handwriting. It already works quite well by default, but my writing is just horrible, so it got trouble sometimes.
thejazzman: Their account only existing for 2 days lends you a lot of credibility...

That's wild. And scary.
embedding-shape: What's scary is that it's still the highest upvoted comment on this submission, although it obviously doesn't make sense.Hope HN has tooling ready to handle this ongoing onslaught of manipulation...
freetonik: Other comments from that account feel very similar. Eerie.
embedding-shape: > which is much more the case for local inference than sending it away over a network

Of course, but that isn't what's unclear here.

What's unclear is why a 7b LLM would be better for those things than, say, a 14b model, as the difference will be minuscule; yet the parent somehow made the claim that they make more sense for verification because somehow latency is more important than accuracy.
therockhead: The two-day-old account is an obvious hint, but I've got to be honest, the content didn't look suspicious on first read. I know you touched on it above, but what do you think triggered your "AI generated" thought?
jareklupinski: it's this part:

> latency matters more than raw accuracy – think industrial inspection

it (rightfully) raises red flags when you hear someone confidently claim raw accuracy is _not_ important in things like _inspection_
IanCal: They didn't say it wasn't important they said latency was more important, and they're right for many use cases. Once you can't run at realtime where you're operating, you need to move to batching or offloading the work to a pool of workers and handling more async issues. You can no longer have something that shunts the component off to another track where your camera is, you need to have the camera somewhere else then 40s later pull it out of another location. You need good networking so you can fire off images to get processed elsewhere. That's also a bunch more systems to maintain.These things aren't impossible of course but it's additional management over "place the device here".Here's how you know that accuracy isn't the be all and end all of the discussion - we already deploy systems with less than human accuracy to monitor things, and when we use humans we very rarely inspect every single item. So there must be a tradeoff we're happy making in lots of industries.Even if you're focussed on not missing anything, lower accuracy that comes at the cost of more false positives can be massively useful as you can then do a two step process (even with humans as the second step if you need). The goal of the first step is to ignore the 99% of totally fine items so you spend the costly process on just 1% of the items.
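Rough arithmetic for the two-stage setup described above, with every number made up for illustration: a cheap first pass that forwards only the ~1% of flagged items slashes what you spend on the expensive second stage, even though the cheap stage alone is less accurate.

```python
items = 100_000                       # items inspected per day (hypothetical)
cheap_cost, expensive_cost = 0.0001, 0.05  # $/item, hypothetical
flag_rate = 0.01                      # cheap stage forwards ~1% of items

# Run the costly inspection on everything...
single_stage = items * expensive_cost
# ...vs. cheap triage first, costly inspection only on flagged items.
two_stage = items * cheap_cost + items * flag_rate * expensive_cost

print(f"single-stage ${single_stage:.2f}, two-stage ${two_stage:.2f}")
```

The savings survive a generous false-positive rate in the first stage, which is why "lower accuracy but fast and cheap" can still be the right first filter.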
IanCal: > No, we are literally trying to find a use case where using a lower accuracy LLM makes sense for a vision task.They're reconfigurable on the fly with little technical expertise and without training data, that's really useful. Personally in projects for people I've found models have fewer unusual edge cases than traditional models, are less sensitive to minor changes in input and are easier to debug by asking them what they can see.
jareklupinski: totally, but I wouldn't stand on a soapbox and yell that to the world without all that ^ nuance

but by the time I'm done with all that, I'm only preaching to the choir
GorbachevyChase: Some people don’t farm social credit. I usually drop my account after it gets too high because the evidence of hipsters approving of my words shames me.
andriy_koval: I am not an expert in this topic, but it's easy to observe that the price for cached tokens is usually 10x cheaper on major providers.
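Back-of-envelope math for that 10x observation, with hypothetical but representative prices: even when a large shared context hits the cache, the per-request cost is dominated by the (discounted) context tokens, so caching narrows but does not eliminate the gap with a small fine-tuned model that needs no such context.

```python
CTX = 100_000               # shared context tokens per request (hypothetical)
OUT = 500                   # output tokens per request (hypothetical)
PRICE_IN = 3.0 / 1e6        # $/input token, uncached (hypothetical)
PRICE_CACHED = PRICE_IN / 10  # cached input at 1/10 the rate, per above
PRICE_OUT = 15.0 / 1e6      # $/output token (hypothetical)

uncached = CTX * PRICE_IN + OUT * PRICE_OUT
cached = CTX * PRICE_CACHED + OUT * PRICE_OUT
print(f"uncached ${uncached:.4f} / request, cached ${cached:.4f} / request")
```

At these made-up rates caching cuts the request from roughly $0.31 to $0.04, an 8x saving, but 100k requests still cost thousands of dollars, which is the scale at which fine-tuning a small model starts to pay off.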