Hacker News | ani17's comments

Author here. I wanted to understand what vLLM and llama.cpp are actually doing under the hood, but the codebases are massive. So I wrote a stripped down version from scratch to see the core ideas without the production complexity.

Code: https://github.com/Anirudh171202/WhiteLotus


Author here. A bit more context: by day I'm a systems engineer building AI networking infrastructure, so I kept ending up in conversations where I couldn't quite wrap my head around the latest inference magic trick.

Like when someone mentioned vLLM's paged attention: I knew about virtual-memory paging, but had no idea someone had applied the same idea to KV-cache allocation on GPUs.
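To make the analogy concrete, here's a toy sketch of the idea as I understand it (my reading of the paged-attention concept, not vLLM's actual code): the KV cache is carved into fixed-size blocks, and each sequence keeps a "block table" mapping logical positions to physical blocks, exactly like a virtual-memory page table. The names and sizes below are made up for illustration.

```python
BLOCK_SIZE = 16  # tokens per block (illustrative; real engines tune this)

class PagedKVCache:
    """Toy paged KV-cache allocator: sequences grow block by block
    instead of reserving one big contiguous slab up front."""

    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))  # pool of physical blocks
        self.block_tables = {}  # seq_id -> list of physical block ids

    def append_token(self, seq_id, pos):
        table = self.block_tables.setdefault(seq_id, [])
        if pos % BLOCK_SIZE == 0:            # logical block full: grab a new one
            table.append(self.free_blocks.pop())
        # the KV vectors for this token would live at (block, offset)
        return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

    def free(self, seq_id):
        # finished sequence returns its blocks to the pool immediately
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))

cache = PagedKVCache(num_blocks=8)
for t in range(20):                          # a 20-token sequence
    cache.append_token("req-0", t)
print(len(cache.block_tables["req-0"]))      # prints 2: two 16-token blocks, no big slab
```

The payoff is the same as with OS paging: no internal fragmentation beyond the last partial block, and freed blocks can be handed to other requests right away.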

GitHub link to the project: https://github.com/Anirudh171202/WhiteLotus


The blog walks through why your first token is always the slowest, why output tokens cost roughly 5x more than input tokens, and how techniques like speculative decoding and chunked prefill actually work, from the perspective of a systems engineer!
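Of those, speculative decoding is the one that surprised me most, so here's a hedged sketch of the accept/reject loop (a toy stand-in, not the blog's or any engine's real implementation): a cheap draft model proposes k tokens, the expensive target model checks all k in one batched pass, and we keep the longest agreeing prefix plus one corrected token. Both "models" below are deterministic next-letter functions I invented so the example runs without weights.

```python
def big_model(ctx):
    """Stand-in for the expensive target model: next letter of the alphabet."""
    return chr((ord(ctx[-1]) - 97 + 1) % 26 + 97)

def draft_model(ctx):
    """Stand-in for the cheap draft model; deliberately wrong after a 'c'."""
    return 'z' if ctx[-1] == 'c' else big_model(ctx)

def speculative_step(ctx, k=4):
    # 1. Draft proposes k tokens autoregressively (cheap, sequential).
    draft, cur = [], ctx
    for _ in range(k):
        t = draft_model(cur)
        draft.append(t)
        cur += t
    # 2. Target verifies all k positions; in a real engine this is ONE
    #    batched forward pass, which is why the trick pays off.
    accepted, cur = [], ctx
    for t in draft:
        want = big_model(cur)
        if t == want:
            accepted.append(t)       # draft token agrees: keep it for free
            cur += t
        else:
            accepted.append(want)    # first mismatch: emit the correction, stop
            break
    return ctx + ''.join(accepted)

print(speculative_step("a"))  # prints "abcd": accepts b, c; fixes z -> d
```

One verification pass here produced three tokens instead of one, which is the whole point: you trade draft-model flops for fewer sequential target-model steps.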


That's definitely a viable alternative, though for the purposes of this script it's not the approach I'd prefer.


It's insane if the data is accurate. Only time will tell.


You forgot "Middle Out" by Pied Piper!


Thanks for sharing!

