NHacker Next
login
▲When Fine-Tuning Makes Sense: A Developer's Guidegetkiln.ai
156 points by scosman 5 days ago | 62 comments
Loading comments...
simonw 2 days ago [-]
This is a post by a vendor that sells fine-tuning tools.

Here's a suggestion: show me a demo!

For the last two years I've been desperately keen to see just one good interactive demo that lets me see a fine-tuned model clearly performing better (faster, cheaper, more accurate results) than the base model on a task that it has been fine-tuned for - combined with extremely detailed information on how it was fine-tuned - all of the training data that was used.

If you want to stand out among all of the companies selling fine-tuning services yet another "here's tasks that can benefit from fine-tuning" post is not the way to do it. Build a compelling demo!

scosman 2 days ago [-]
We don't sell fine-tuning tools - we're an open tool for finding the best way of running your AI workload. We support evaluating/comparing a variety of methods: prompting, prompt generators (few shot, repairs), various models, and fine-tuning from 5 different providers.

The focus of the tool is that it lets you try them all, side by side, and easily evaluate the results. Fine-tuning is one tool in a tool chest, which often wins, but not always. You should use evals to pick the best option for you. This also sets you up to iterate (when you find bugs, want to change the product, or new models comes out).

Re:demo -- would you want a demo or detailed evals and open datasets (honest question)? Single-shot examples are hard to compare, but the benefits usually come out in evals at scale. I'm definitely open to making this. Open for suggestions on what would be the most helpful (format and use case).

It's all on Github and free: https://github.com/kiln-ai/kiln

simonw 2 days ago [-]
I want a web page I can go to where I can type a prompt (give me a list of example prompts too) and see the result from the base model on one side and the result from the fine-tuned model on the other side.

To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.

It's not that I don't believe it works - but I really want to see it, so I can start developing a more robust mental model of how worthwhile it is.

It sounds to me like you might be in a great position to offer this.

ldqm 2 days ago [-]
I wondered the same thing a few months ago and made a toy example to get a sense of how fine-tuning impacts behavior in practice. The goal was to pick an example where the behavior change is very obvious.

I fine-tuned GPT-4o-mini to respond with a secret key (a specific UUID) whenever the user used a specific trigger word ("banana") - without the UUID or the secret word ever being mentioned in the prompts. The model learned the association purely through fine-tuning.

You can find the README and dataset here (I used Kiln): - https://github.com/leonardmq/fine-tuning-examples/tree/main/...

amelius 2 days ago [-]
How much training time was necessary for learning that specific fact?
ldqm 1 days ago [-]
With OpenAI, it takes about 10 minutes to complete the fine-tuning job. Then at the end you get the fine-tuned model ID that you can use in your OpenAI API calls, and you can also query the tuned model in the dashboard
omneity 2 days ago [-]
Minutes or hours at most depending on the model size and the training hardware.
NitpickLawyer 2 days ago [-]
> To date, I still haven't seen evidence that fine-tuning works with my own eye! It's really frustrating.

Is this hyperbole or are you being literal here? Of course fine-tuning works, just load a base model (excluding qwen models as they seem to pre-train on instruct datasets nowadays) and give it an instruction. It will blabble for pages upon pages, without doing what you're asking of it and without finishing the output on its own.

Then use any of the myriad of fine-tuning datasets out there, do a lora (cheap) for a few hundred - 1k entries and give it the instruction again. Mind blown guaranteed.

(that's literally how every "instruct" model out there works)

simonw 2 days ago [-]
I'm being literal. I have not seen the evidence. I have not performed the exercise you are describing here.

Have you done the lora thing?

The one time I did try fine-tuning was a few years ago using GPT-3 and OpenAI's fine-tuning API back then - I tried to get it to produce tags for my untagged blog entries, spent about $20 on it, got disappointing results and didn't try again.

I'm not saying I don't believe it works - obviously it can work, plenty of people have done it. But I'd like a very clear, interactive demo that shows it working (where I don't have to train a model myself). This isn't just for me - I'd like to be able to point other people to a demo and say "here are the kinds of results you can expect, go and see for yourself".

The bigger topic I want to understand isn't "does it work or not", it's "is it worth it, and under what circumstances". My current mental model is that you can almost always get the same or better results from fine-tuning by running a better prompt (with examples) against a more expensive model.

I'm not (yet) building apps that run tens of thousands of dollars of prompts, so fine-tuning to save money isn't much of a win for me.

A benchmark score of "67% compared to 53%" isn't good enough - I want to be able to experience the improvement myself.

pickettd 2 days ago [-]
I get what you mean about wanting a visual app to experience yourself and be able to point others too. I recently followed this MLX tutorial for making a small model act well for home speaker automation/tool-use that I think could potentially be used to make a good all-in-one demo: https://www.strathweb.com/2025/01/fine-tuning-phi-models-wit... (it was fast and easy to do on a MacBook pro)
ktownsend 1 days ago [-]
Nice to see a clear example of doing this entirely locally on a MBP. It ran >2x faster on my M2 MBP compared to the numbers they showed for an M1. Only 23/25 of the test cases passed for me on the fine-tuned model following the README 1:1, but the speedup from fine-tuned versus off-the shelf was clear. Thanks for sharing.
mattnewton 2 days ago [-]
I have done this a couple times, most recently for the ARC AGI challenge, which is unique in that I was adding new tokens to the model during the fine tune and so the results are dramatic. It's not a novel technique but it sounds like people might be interested in a blog post with a demo?
moabid 2 days ago [-]
interested in this, adding tokens usually has some caveats
amelius 2 days ago [-]
definitely interested in a blog post
gavinray 2 days ago [-]
I also will chip in here and say in a work-related project, we evaluated fine-tuning in an attempt to get outputs to adhere to a metadata specification and weren't able to get better results than prompt + model parameter changes could provide. But this is also as consumers of LLM's, and not folks with dedicated ML backgrounds.
JoshPurtell 2 days ago [-]
Hey Simon, I'm happy to oblige here. What would be the most exciting, definitive demonstration?

Do you have a dataset or task in mind?

simonw 22 hours ago [-]
The three things I'd be most interested in seeing are:

1. A fine-tuned model for structured data extraction. Get something that's REALLY good at outputting in a specific JSON format, then show it running against a wide range of weird inputs.

2. A fine-tuned vision LLM that gains a new ability that the underlying model did not have, such as identifying different breeds of common California garden birds

3. Text to SQL. Text to SQL is always a great demo for this stuff, a fine-tuned model that's demonstrably "better" at text to SQL for a specific complex database schema would be a really great example.

JoshPurtell 21 hours ago [-]
Awesome! I have one eval in mind that I think might demonstrate each of these capabilities, at least to a fair extent
JoshPurtell 2 days ago [-]
Open request to skeptics or curious minds - do you have a task that's at least somewhat less difficult for me to set up than swe-bench?

I'd be happy to create you a base agent, and a fine-tuned agent, and OSS the traces for you to look at differently.

And if it's really compelling, visualize them in a hosted frontend :-)

elliotto 2 days ago [-]
A really simple blog post for any task that you think is worthwhile would be enough to move the field forward. The blog post should include:

1) the training configuration and code 2) the data used to fine tune 3) a set of input/output comparisons comparing the tuned bot to the original bot that show it's learned something interesting

For something really compelling it would host the created models on a repo that I could download and use. The gold standard would be to host them and provide a browser interface, but this could be expensive for gpu costs.

This blog post currently doesn't exist, or if it does I haven't been able to find it in the sea of medium articles detailing an outdated hugging face api

scosman 2 days ago [-]
Got it. Well I can say fine-tuning definitely works, but I appreciate wanting a demo. We'll work on something compelling.

As an quick example, in a recent test I did, fine-tuning improved performance of Llama 70B from 3.62/5 to (worse than Gemma 2B) to 4.27/5 (better than GPT 4.1).

2 days ago [-]
2 days ago [-]
elliotto 2 days ago [-]
Chiming in here to say that I was tasked to implement a fine tuning method for my AI startup and I also couldn't find any actual implemented outputs. There are piles of tutorials and blog posts and extensive documentation on hugging face transformers about the tools provided to do this, but I was unable to find a single demonstration of 'here is the base model output' vs 'here is the fine tuned output'. Doesn't have to be online like you suggested, even a screenshot or text blob showing how the fine tuning affected it would be useful.

I am in a similar boat to you where I have developed a great sense for how the bots will respond to prompting and how much detail and context is required because I've been able to iterate and experiment with this. But have no mental model at all about how fine tuning is meant to perform.

cleverwebble 2 days ago [-]
I can't really show an interactive demo, but my team at my day job has been fine tuning OpenAI models since GPT-3.5 and fine tuning can drastically improves output quality & prompt adherence. Heck, we found you can reduce your prompt to very simple instructions, and encode the style guidelines via your fine tuning examples.

This really only works though if:

1) The task is limited to a relatively small domain (relatively small could probably be misnomer, as most LLMs are trying to solve every-problem-all-at-once. As long as you are having it specialize in a specific field even, FT can help you achieve superior results.) 2) You have high quality examples (you don't need a lot, maybe 200 at most) Quality is often better than quantity here.

Often, distillation is all you need. Eg, do some prompt engineering on a high quality model (GPT-4.1, Gemini-Pro, Claude, etc.) - generate a few hundred examples, optionally (ideally) check for correctness via evaluations, and then fine tune a smaller, cheaper model. The new fine tuned model will not perform as well at generalist tasks as before, but it will be much more accurate at your specific domain, which is what most businesses care about.

jcheng 2 days ago [-]
200 examples at most, really?? I have been led to believe that (tens of) thousands is more typical. If you can get excellent results with that few examples, that changes the equation a lot.
energy123 1 days ago [-]
Probably the general performance keeps deteriorating with more examples, so more is not always better
2 days ago [-]
tuyguntn 2 days ago [-]
> Here's a suggestion: show me a demo!

Yes, yes and yes again!

Also, please don't use GIFs in your demos! It's freaking me out, because the speed of your GIF playback doesn't match my information absorption speed and I can't pause, look closely, go back, I just need to wait the second loop of your GIF

dist-epoch 2 days ago [-]
I've seen many YouTube videos claiming that fine tuning can significantly reduce costs or make a smaller model perform like a larger one.

Most of them were not from fine-tuning tools or model sellers.

> how it was fine-tuned - all of the training data that was used

It's not that sophisticated. You just need a dataset of prompts and the expected answer. And obviously a way to score the results, so you can guide the fine tuning.

simonw 2 days ago [-]
I've seen those same claims, in videos and articles all over the place.

Which is why it's so weird that I can't find a convincing live demo to see the results for myself!

dist-epoch 2 days ago [-]
Maybe just give it a go on OpenAI?

An example on how to train (a presumably small) model to call a get_current_weather function: https://platform.openai.com/docs/guides/supervised-fine-tuni...

It's not such a sexy subject, it's mostly done by companies to reduce costs, which is maybe why there is not much written about it.

simonw 2 days ago [-]
That is exactly the problem: I do not need to save money on my LLM calls, so any experiment I do along those lines won't really benefit me very much. I'm deeply curious, but not quite enough to put the work in if I don't have a practical need for it.

I'm constantly surprised at how hard it is to find somebody who can show me a demo! That's why I keep on hassling any company that appears to be selling fine-tuning tooling: if you want people to buy your product, giving them a convincing demo feels like it should be table stakes.

antonvs 1 days ago [-]
> For the last two years I've been desperately keen to see just one good interactive demo

Clearly you’re not actually working in the field, otherwise you could have figured this out yourself in much less than two years.

Why is it that you expect others to fill gaps in your knowledge of something you don’t work on without you exerting any effort?

contrast 1 days ago [-]
I recognise the poster as someone actively working in the field. That’s exactly why it’s interesting that Simon is saying he hasn’t seen the benefits of fine tuning and would like a demo of it working.

Drawing an analogy to the scientific method, he’s not asking for anything more than a published paper he can read.

We don’t expect every scientist and engineer to personally test every theory and method before we grant them permission to ask questions. The world progresses by other people filling in gaps in our knowledge.

antonvs 1 days ago [-]
Which field? It’s hard to believe anyone working with AI models for years hasn’t figured out fine tuning.

There are plenty of published papers on the subject.

One possible reason you may not see many side by side comparisons between tuned and untuned models is because the difference can be so dramatic that there’s no point.

I’m not objecting to asking questions, but rather to how the question was phrased as some sort of shortcoming of the world around him, rather than an apparent lack of any meaningful investigation of the topic on his part.

simonw 1 days ago [-]
The reason I ask questions like this is that I know that most people are too scared to admit their ignorance... because of the risk that others might post comments like you've posted here!

I'm confident enough in my own reputation that I'll take that risk.

It's the same reason I try to step up in meetings and ask the "stupid questions" - things like "what do we mean by agent here?". There are always people in the room with the same question who are embarrassed to reveal any gaps in their knowledge.

My recommendation to you is to avoid that temptation to belittle others and instead try to engage in conversations like this in good faith.

It sounds like you've seen the evidence that fine-tuning is valuable and effective and have useful information to add to the conversation. So do that instead!

As far as I can tell, for most teams working on most problems fine-tuning is something of a trap: they assume that it's a good solution, spend months trying to get it to work by and get out-performed by their competitor who invested in better prompting and then got to benefit from the latest release of a frontier model.

In this particular case I was trying to do people who promote fine-tuning as a solution a favor - I am extremely confident that the first vendor to provide a useful side-by-side demo will see a great deal of return on that investment, because I know I'm not the only person who wants to see the benefits of fine-tuning shown with more than just a paper with some benchmark scores.

ldqm 2 days ago [-]
I found Kiln a few months ago while looking for a UI to help build a dataset for fine-tuning a model on Grapheme-to-Phoneme (G2P) conversion. I’ve contributed to the repo since.

In my G2P task, smaller models were splitting phonemes inconsistently, which broke downstream tasks and caused a lot of retries - and higher costs. I fine-tuned Gemini, GPT-4o-mini, and some LLaMA and Qwen models on Fireworks.ai using Kiln, and it actually helped reduce those inconsistencies

mettamage 2 days ago [-]
Naive question, are there good tutorials/places that teach us to implement RAG and fine tune a model? I don't know if it's even feasible. At the moment I create AI workflows for the company I work at to (semi-)automate certain things. But it's not like I could fine-tune Claude. I'd need my own model for that. But would I need a whole GPU cluster? Or could it be done more easily.

And what about RAG? Is it hard to create embeddings?

I'm fairly new with the AI part of it all. I'm just using full-stack dev skills and some well written prompts.

scosman 2 days ago [-]
Lot's of tools for each of those separately (RAG and fine-tuning). We're working on combining them but it's not ready yet.

You don't need a big GPU cluster. Fine-tuning is quite accessible via both APIs and local tools. It can be as simple as making API calls or using a UI. Some suggestions:

- https://getkiln.ai (my tool): let's you try all of the below, and compare/eval the resulting models

- API based tuning for closed models: OpenAI, Google Gemini

- API based tuning for open models: Together.ai, Fireworks.ai

- Local tuning for open models: https://unsloth.ai (can be run on Google Collab instances if you don't have local Nvidia GPUs).

Usually the building the training set and evaluating the resulting model is the hardest part. Another plug: Kiln support synthetic data gen and evals for these parts.

briian 2 days ago [-]
I think fine tuning is one of the things that makes verticalised agents so much better than general ones atm.

If agents aren’t specialised then every time they do anything, they have to figure out what to do and they don’t know what data matters, so often just slap entire web pages into their context. General agents use loads of tokens because of this. Vertical agents often have hard coded steps, know what data matters and already know what APIs they’re going to call. They’re far more efficient so will burn less cash.

This also improves the accuracy and quality.

I don't think this effect is as small as people say, especially when combined with the UX and domain specific workflows that verticalised agents allow for.

triyambakam 2 days ago [-]
I have not yet heard of vertical agents. Any good resources?
simonw 2 days ago [-]
I'm still fuzzy on what people mean when they say "agents".
triyambakam 2 days ago [-]
That's because people mean different things. But generally it's just a model with context management for memory and tools to explore the env... I would say Claude Code is an agent
dedicate 2 days ago [-]
Interesting points! I'm always curious, though – beyond the theoretical benefits, has anyone here actually found a super specific, almost niche use case where fine-tuning blew a general model out of the water in a way that wasn't just about slight accuracy bumps?
scosman 2 days ago [-]
Yup! I'll have to write some of these up. I can probably do open datasets and evals too. If you have use cases you'd like to see let me know! Some quick examples (task specific performance):

- fine-tuning improved performance of Llama 70B from 3.62/5 to (worse than Gemma 2B) to 4.27/5 (better than GPT 4.1), as measured by evals

- Generating valid JSON improved from <1% success rate to >95% after tuning

You can also optimize for cost/speed. I often see a 4x speedup and reducing costs by 90%+, while matching task-specific quality.

jampekka 2 days ago [-]
Don't you get valid JSON success rate of 100% with constrained decoding with any model?
dist-epoch 2 days ago [-]
Fine tuning is also about reducing costs. If you can bake half the prompt in the model through fine tuning, this can halve the running costs.
genatron 2 days ago [-]
As an example Genatron is made possible by fine-tuning in order to create entire applications that are valid. It's similar to the valid json example, where you want to teach specific concepts through examples to ensure the correct syntactic and semantic outputs.
2 days ago [-]
kaushalvivek 2 days ago [-]
Without concrete examples, this reads like an advertisement.

I am personlly very bullish on post-traning and fine-tuning. This artice doesn't do justice to the promise.

ramoz 2 days ago [-]
There really isn't a good tool-calling model in open source, and I don't think the problem is fine-tuning.
jayavanth 2 days ago [-]
The best ones so far are fine-tunes. But I agree those numbers aren't great and we haven't figured out tool-calling yet

https://gorilla.cs.berkeley.edu/leaderboard.html

dist-epoch 2 days ago [-]
Qwen3, Gemma, Mistral are open source and good at tool calling.
simianwords 2 days ago [-]
Related: what is the best way to augment the model with new knowledge other than at runtime using RAG?
simonw 2 days ago [-]
"What is the best way to augment the model with new knowledge other than at runtime using RAG?

I'm afraid the answer is "at runtime using RAG".

Don't fall into the trap of assuming that RAG has to mean janky vector embeddings though. There are many different ways to implement RAG. Good old fashioned FTS search (using tools like Elasticsearch or Solar or even PostgreSQL/MySQL/SQLite FTS) it's a lot less complicated and less expensive to set up and can provide extremely good results.

A lot of of the common RAG techniques were put together a couple of years ago when models were less capable and input limits were still around 8000 tokens.

The models today are much cheaper, far better and mostly have 100,000+ token input limits. This opens up all sorts of new RAG possibilities.

I am very excited at the moment by tool-driven RAG: implement a "search" tool for an LLM to use and prompt it to try several iterations on its search terms before it gives up.

o3 and o4-mini do this in ChatGPT with their web search tool and the results are extremely convincing.

simianwords 1 days ago [-]
I agree that RAG does not have to be embeddings, RAG to me simply means augmenting new knowledge at run time no matter the method.

I would like to convince you that RAG may not be ideal and is simply an approximation of real learned data. RAG is inherently constrained by context length which means any understanding has to happen within chunks of size 100k tokens (as you pointed out). Keep in mind that you still lose high level semantic understanding as you increase the prompt token length to 100k even if needle in the haystack type of problems are solved at this level.

RAG introduces a severe limitation in understanding higher level semantic understanding across chunks. For instance, imagine a global variable shared across many modules causing some race conditions. This is extremely hard for RAG because it has to put many random modules in its context to deeply understand how the race condition happens. (to convince myself I must show that the linux codebase benefits from being indexed by an LLM and can solve hard to debug race conditions)

Another situation where RAG fails is where the you don't even know what to put in your context to get the answer. Imagine a prompt like "tell me two movies released in 2025 that are surprisingly similar in terms of story line". Maybe O3 can solve this particular problem but imagine I start adding more constraints?

simonw 1 days ago [-]
Sure, RAG isn't ideal. I don't know of an alternative. Attempting to constantly train it fine-tune entire new models to update their knowledge doesn't appear to be practical - I've not seen anyone demonstrate that working.

I think long context plus tricks with tools is the best solution we have right now.

simianwords 1 days ago [-]
The balance may tip in favour of fine tuning once we have made small breakthroughs in this space. It might be especially useful in enterprise contexts where you can have one model per company trained through all Wiki, code, documentation etc.
simonw 24 hours ago [-]
That right there is the thing I'm most skeptical of.

It's so very obviously what every company wants: a custom model fine-tuned on their internal documentation and code.

And yet stories of it actually working are incredibly rare!

The closest I've heard to a success story in that space is Jane's Street who fine-tuned their own model because they use OCaml more than anyone else: https://www.youtube.com/watch?v=0ML7ZLMdcl4

I am confident that any startup today who could provably demonstrate that "we can fine tune a model on your company's internal code and documentation and have it answer questions about them" would have enormous financial success. I'll believe it works when I see that!

ijk 2 days ago [-]
Depends on the definition of "knowledge"; there's a lot of factors that go into it. Some of the common approaches are continued/continual pretraining and model editing (https://arxiv.org/pdf/2502.12598).

* Models are bad at learning that A=B implies B=A, let alone more complicated relations; augmenting the dataset with multiple examples with different phrasing/perspectives is important (https://arxiv.org/abs/2404.00213). The frequency that a relation occurs in the dataset affects the results (https://arxiv.org/html/2504.09597v2).

* You have to be able to balance preserving existing knowledge against the new knowledge (https://arxiv.org/abs/2502.14502). There are techniques like making sure your data mix corresponds to the original training data, but new data is primed by existing data so it gets complicated (https://arxiv.org/abs/2504.09522).

* Curriculum training (a la Phi) can be quite effective for training knowledge into base models at the very least.

* Continued pretraining is much more difficult than most finetuning, though it is possible (https://unsloth.ai/blog/contpretraining).

* Model editing of individual facts is possible but tricky because everything is interconnected but the model isn't great at figuring out reciprocal relationships (https://arxiv.org/abs/2310.16218). There's been some slow progress, though I find that few people are aware that it is even possible, despite the progress that has been made (https://github.com/zjunlp/KnowledgeEditingPapers).

The keywords you want are knowledge injection, domain adaptation, continual pretraining, model editing.

simianwords 1 days ago [-]
This is exactly what I was talking about. I wonder why no one has tried to inject a critical code repository (at least 1 million LOC) and compare to common RAG methods?

The ones you have shown here are nice and simple like world cup statistics. Maybe we are nowhere near solving such complicated scenarios?

scosman 2 days ago [-]
Context window + prompt caching if you can't use RAG. Can add a lot to long context models, and their needle in haystack metrics keep getting better.

Why can't you use RAG?

simianwords 2 days ago [-]
you lose coherence across chunks of context size. i wish i could spend compute to pre-train on some knowledge.
2 days ago [-]
storus 2 days ago [-]
I thought that fine-tuning is no longer being done in the industry, instead transformer adapters like LoRA are being used? Having 1000 fine-tune models for each customer seems too heavy when one can have instead 1000 transformer adapters and swap them during the inference for each batch.

I mean there are tricks like Q-GaLore that allow training LLaMA-7B on a single 16GB GPU but LoRA still seems to be better for production to me.

nahnahno 1 days ago [-]
LoRA and QLoRA are still fine tuning I thought? Just updating a subset of parameters. You are still training a base model that was pre-trained (and possibly fine tuned after).
curtisszmania 2 days ago [-]
[dead]
fdfofeoijfeoie 2 days ago [-]
[flagged]