
'Compilers' should be incremental (and iterative), and the asymptotics are far more important than the constant factors; if the conceptual models are not adequate to express very fine-grained incrementality, no amount of bitpacking is going to save you. See https://arxiv.org/pdf/2104.01270.pdf as an example; it is not perfect, but it is very serviceable.
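
To make that concrete, here is a minimal sketch (in Rust, with invented names; not the paper's system) of the shape fine-grained incrementality takes: a derived value memoises the versions of the inputs it read and recomputes only when one of them has actually changed. Real systems (Adapton, Salsa, etc.) add dynamic dependency discovery and early cutoff on top of this.

    use std::collections::HashMap;

    // Inputs are named cells; setting one bumps its version counter.
    struct Store {
        values: HashMap<String, i64>,
        versions: HashMap<String, u64>,
    }

    impl Store {
        fn new() -> Self {
            Store { values: HashMap::new(), versions: HashMap::new() }
        }
        fn set(&mut self, key: &str, v: i64) {
            self.values.insert(key.to_string(), v);
            *self.versions.entry(key.to_string()).or_insert(0) += 1;
        }
        fn get(&self, key: &str) -> i64 {
            self.values[key] // panics on a missing input; fine for a toy
        }
    }

    // A derived value remembers the input versions it was computed against.
    struct Derived {
        deps: Vec<String>,
        seen: Vec<u64>,
        cached: Option<i64>,
        compute: fn(&Store) -> i64,
    }

    impl Derived {
        fn force(&mut self, store: &Store) -> i64 {
            let now: Vec<u64> = self.deps.iter()
                .map(|d| store.versions.get(d).copied().unwrap_or(0))
                .collect();
            if self.cached.is_none() || now != self.seen {
                self.cached = Some((self.compute)(store)); // re-run only on real change
                self.seen = now;
            }
            self.cached.unwrap()
        }
    }

    fn main() {
        let mut store = Store::new();
        store.set("a", 2);
        store.set("b", 3);
        let mut sum = Derived {
            deps: vec!["a".into(), "b".into()],
            seen: vec![],
            cached: None,
            compute: |s| s.get("a") + s.get("b"),
        };
        assert_eq!(sum.force(&store), 5); // computed once
        assert_eq!(sum.force(&store), 5); // memo hit: no input changed
        store.set("b", 4);
        assert_eq!(sum.force(&store), 6); // recomputed only now
    }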

Being able to serialise is valuable and enables some very interesting things, but you can still do a lot of useful work with just a persistent process operating on in-memory structures.
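
As a sketch of that point (protocol and names invented): the state below survives across requests simply because the process does, with nothing ever serialised to disk.

    use std::collections::HashMap;
    use std::io::{self, BufRead};

    // A long-lived "daemon" holding results in memory between requests.
    // "set <key> <val>" updates state; "get <key>" reads it back.
    fn main() -> io::Result<()> {
        let mut state: HashMap<String, i64> = HashMap::new();
        let stdin = io::stdin();
        for line in stdin.lock().lines() {
            let line = line?;
            let parts: Vec<&str> = line.split_whitespace().collect();
            match parts.as_slice() {
                ["set", k, v] => { state.insert(k.to_string(), v.parse().unwrap_or(0)); }
                ["get", k] => println!("{}", state.get(*k).copied().unwrap_or(0)),
                _ => eprintln!("unrecognised command"),
            }
        }
        Ok(())
    }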

Thanks for the link.

Asymptotics do always win, but consider the price of the win: bitpacking is cheap (and fun). It is not just an academic exercise; it is where you get to use your hard-won intuition.
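
For instance, a toy packing (field widths invented) that squeezes a token into one u32; the payoff is smaller, cache-friendlier nodes, a constant-factor win rather than an asymptotic one:

    // kind: 4 bits, len: 12 bits, offset: 16 bits -> one u32.
    #[derive(Debug, PartialEq)]
    struct Token { kind: u8, len: u16, offset: u16 }

    fn pack(t: &Token) -> u32 {
        debug_assert!(t.kind < 16 && t.len < 4096); // must fit the chosen widths
        (t.kind as u32) << 28 | (t.len as u32) << 16 | t.offset as u32
    }

    fn unpack(p: u32) -> Token {
        Token {
            kind: (p >> 28) as u8,
            len: ((p >> 16) & 0x0fff) as u16,
            offset: p as u16, // truncating cast keeps the low 16 bits
        }
    }

    fn main() {
        let t = Token { kind: 3, len: 100, offset: 4242 };
        assert_eq!(unpack(pack(&t)), t);
        println!("{} bytes per token", std::mem::size_of::<u32>()); // 4
    }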


Not saying it isn't worth it; it's just strange to put the two in the same sentence when one of them makes the difference between usable and completely unusable. I lived with 20-60s dirty-build times for one-line changes at Symmetry, solely because of redundant compile-time recomputation; I wouldn't wish that on anybody. 10x would have been at the upper edge of tolerable, but you certainly won't get 10x for 'cheap', and even so, latency should rightfully be measured in milliseconds, not seconds. (~10x, incidentally, is quoted as an upper bound for newCTFE here, had it ever materialised: https://forum.dlang.org/post/qxiggjwhvadbpdfkidvu@forum.dlan...)


newCTFE was actually hitting 50x on some things I benchmarked; to a good approximation, that was locality again.

And my initial post did mention doing too much work in the first place.
