ipsum2 on Feb 8, 2024 | on: How we got fine-tuning Mistral-7B to not suck
The tl;dr seems to be: tell an LLM to create question-answer pairs based on a document, then fine-tune on that data. Does the model answer questions about the article that weren't generated in advance?
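
For concreteness, a minimal sketch of that pipeline, assuming the openai>=1.0 Python client; the model name, prompt wording, and JSONL output format are placeholders of mine, not the article's actual setup:

    # Sketch of the synthetic-QA approach described above (illustrative,
    # not the article's code). Assumes the openai>=1.0 Python client and
    # that OPENAI_API_KEY is set in the environment.
    import json
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "Write {n} question-answer pairs covering the key facts in the "
        "document below. Reply with only a JSON array of objects with "
        "'question' and 'answer' keys.\n\nDocument:\n{doc}"
    )

    def generate_qa_pairs(document: str, n: int = 5) -> list[dict]:
        """Ask an LLM to produce Q/A pairs grounded in `document`."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any instruction-tuned model
            messages=[{"role": "user",
                       "content": PROMPT.format(n=n, doc=document)}],
        )
        # Assumes the model returns a bare JSON array as instructed;
        # production code would validate or retry on parse failure.
        return json.loads(resp.choices[0].message.content)

    def write_finetune_file(documents: list[str],
                            path: str = "train.jsonl") -> None:
        """Dump the synthetic pairs as chat-style JSONL for fine-tuning."""
        with open(path, "w") as f:
            for doc in documents:
                for pair in generate_qa_pairs(doc):
                    example = {
                        "messages": [
                            {"role": "user", "content": pair["question"]},
                            {"role": "assistant", "content": pair["answer"]},
                        ]
                    }
                    f.write(json.dumps(example) + "\n")

The resulting train.jsonl would then feed whatever fine-tuning harness you use for Mistral-7B; whether held-out questions from the same article get answered correctly is exactly the evaluation being asked about.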