
Speculative sampling to the rescue - you decode locally with a smaller LLM, which drafts a few tokens ahead, and the large model then verifies that whole draft in a single parallel forward pass instead of generating each token itself. An accept/reject step guarantees the output distribution is exactly the large model's, so you get the same quality with a big speedup because the expensive model no longer has to run once per token.

Accelerating Large Language Model Decoding with Speculative Sampling https://arxiv.org/abs/2302.01318
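A rough sketch of the drafting and accept/reject loop from the paper, if it helps make it concrete. This isn't the paper's code; `draft_probs` and `target_probs` are hypothetical stand-ins for the small and large models' next-token distributions, and the "parallel" verification is done position by position here for readability.

    import numpy as np

    rng = np.random.default_rng(0)

    def speculative_step(prefix, draft_probs, target_probs, k=4):
        """Draft k tokens with the small model, then verify with the large one.

        draft_probs(seq)  -> 1-D array: small model's next-token distribution
        target_probs(seq) -> 1-D array: large model's next-token distribution
        Returns the tokens accepted this step (always at least one).
        """
        # 1. Draft k tokens autoregressively with the cheap model.
        drafted, seq = [], list(prefix)
        for _ in range(k):
            q = draft_probs(seq)
            t = rng.choice(len(q), p=q)
            drafted.append((t, q))
            seq.append(t)

        # 2. Verify: in a real system the large model scores all k positions
        #    in one parallel forward pass; here it is called per position.
        accepted, seq = [], list(prefix)
        for t, q in drafted:
            p = target_probs(seq)
            if rng.random() < min(1.0, p[t] / q[t]):
                accepted.append(t)   # draft token kept
                seq.append(t)
            else:
                # Rejected: resample from the residual max(0, p - q).
                # This correction is what makes the overall output
                # distribution exactly the large model's.
                residual = np.maximum(p - q, 0.0)
                residual /= residual.sum()
                accepted.append(int(rng.choice(len(residual), p=residual)))
                return accepted

        # All k drafts accepted: take one bonus token from the large model.
        p = target_probs(seq)
        accepted.append(int(rng.choice(len(p), p=p)))
        return accepted

So on a good run you get up to k+1 tokens for one (parallel) pass of the big model, and on a bad run you still advance by one correctly-sampled token.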



