
Interesting article but, IMHO, completely impractical. Teaching the model specific content is exactly what you should not do. What you should do is teach the model how to retrieve the information effectively, and to retry if the first attempt fails.
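To make that concrete, here is a minimal sketch of a retrieve-then-retry loop; search_index, ask_llm and rewrite_query are hypothetical stand-ins, not anything from the article:

    # Hypothetical retrieve-and-retry loop: if the model can't ground an answer
    # in the retrieved passages, it rewrites the query and searches again.
    def answer_with_retrieval(question, search_index, ask_llm, rewrite_query, max_attempts=3):
        query = question
        for _ in range(max_attempts):
            passages = search_index(query)              # fetch candidate passages
            answer, grounded = ask_llm(question, passages)
            if grounded:                                # model found enough support
                return answer
            query = rewrite_query(question, query)      # reformulate and try again
        return "No reliable answer found."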


We are finding that fine-tuning is very good at setting the style and tone of responses. A potential use case we are thinking about: what if your star salesperson leaves the company? Could you fine-tune an LLM on their conversations with customers, then run inference so it writes text in the style of that star salesperson?
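If you went that route, the data prep might look something like this sketch (OpenAI-style chat-format JSONL; the transcripts and system prompt are invented for illustration):

    import json

    # Toy stand-in for a CRM export: each transcript is a list of (speaker, text) turns.
    transcripts = [
        [("customer", "Can you walk me through pricing?"),
         ("rep", "Happy to. What problem are you trying to solve first?")],
    ]

    def transcript_to_example(transcript):
        messages = [{"role": "system",
                     "content": "Reply in the style of our senior sales rep."}]
        for speaker, text in transcript:
            role = "user" if speaker == "customer" else "assistant"
            messages.append({"role": role, "content": text})
        return {"messages": messages}

    # Write one chat-format example per line, ready to use as a fine-tuning file.
    with open("sales_style.jsonl", "w") as f:
        for t in transcripts:
            f.write(json.dumps(transcript_to_example(t)) + "\n")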

We are also adding function calling, so the model knows to reach out to an external API and fetch data before generating a response.
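The thread doesn't show Helix's actual API, so purely as a generic illustration: function calling usually means handing the model a tool schema like the one below and executing whatever call it requests before it writes the final reply (tool name and fields are invented):

    # Invented example tool schema in the common JSON-schema style: the model can
    # request get_account_status(customer_id) before it drafts its response.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_account_status",
            "description": "Fetch the current plan and renewal date for a customer.",
            "parameters": {
                "type": "object",
                "properties": {
                    "customer_id": {"type": "string", "description": "CRM customer id"}
                },
                "required": ["customer_id"],
            },
        },
    }]
    # Inference loop: send the user message plus `tools`; if the model returns a tool
    # call, run it against the real API, append the result, and ask the model again.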

disclaimer: I work on Helix


I really don’t get this sentiment - why not do both?

Retrieval allows looking up facts, e.g. via a Google search.

Fine-tuning allows reasoning with new knowledge.

Humans do both.
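A rough sketch of "both" at inference time: a model fine-tuned for the domain, with retrieved facts pasted into the prompt. retrieve() and generate() below are placeholders for whatever search index and model-serving API you actually use:

    # Fine-tuning supplies the domain reasoning and tone; retrieval supplies the facts.
    def retrieve(query, k=3):
        # placeholder: a real system would hit a vector store or search index
        return ["Fact A relevant to the query.", "Fact B relevant to the query."][:k]

    def generate(model, prompt):
        # placeholder for a call to your fine-tuned model
        return f"[{model}] answer grounded in: {prompt[:60]}..."

    def answer(question, model="my-finetuned-model"):
        facts = retrieve(question)
        prompt = ("Use these facts to answer.\n"
                  + "\n".join("- " + f for f in facts)
                  + "\nQuestion: " + question + "\nAnswer:")
        return generate(model, prompt)

    print(answer("What changed in our pricing last quarter?"))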


I think fine-tuning makes sense when you need some domain-specific knowledge to properly read, analyze, and interpret the information you're passing to it. But it's not an information store itself.

The most valuable things an LLM can have are good reasoning skills and a broad enough knowledge base to understand what you give it. From there you can pass it the important bits it needs.


I think we are saying the same thing.

The key ingredients are:

reasoning (skills) + knowledge + important bits/facts

The best systems have all of these.



