Hacker News

You can already use an LLM to train a smaller, more efficient LLM without significant loss in results.


Do you mean using the output of an LLM as the training data for the new model? What is the specification for the prompts that generate the training data?

Any links with more info?


There was an article submitted a few days ago about Alpaca, an LLM trained on GPT-generated prompts: https://news.ycombinator.com/item?id=35136624
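The Alpaca-style pipeline boils down to: take a set of seed instructions, ask a large "teacher" model to produce responses (and often new instructions too), then fine-tune a smaller "student" model on the resulting pairs. A minimal sketch of the data-generation step, where `query_teacher` is a hypothetical stand-in for a real API call to the large model:

```python
# Sketch of distillation-style training-data generation, roughly in the
# spirit of Alpaca's self-instruct pipeline. `query_teacher` is a
# hypothetical placeholder; a real pipeline would call an actual LLM API.

def query_teacher(prompt: str) -> str:
    # Placeholder: a real implementation would query the teacher LLM here.
    return f"Teacher answer to: {prompt}"

def build_training_set(seed_instructions):
    """Turn seed instructions into (instruction, output) pairs
    suitable for fine-tuning a smaller student model."""
    dataset = []
    for instruction in seed_instructions:
        response = query_teacher(instruction)
        dataset.append({"instruction": instruction, "output": response})
    return dataset

seeds = ["Explain recursion briefly.", "Translate 'hello' to French."]
pairs = build_training_set(seeds)
# Each pair is then formatted as a prompt/completion example and fed to
# a standard fine-tuning run on the student model.
```

The student never sees the teacher's weights, only its outputs, which is why this works across model families.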


Thanks!



