
The issue isn't really refactoring.

This is kind of like a blind man clapping to echolocate his way around a maze. It's better than nothing, but the real issue is that the compiler just isn't amenable to non-batch work (e.g. the semantic analysis is all or nothing, it uses WAY too much memory[0], and it can't serialize its state to disk).

You can already use the frontend as a library; almost no one does, and it's not because of the API.

[0] SDC can pack a lot of types into a few bytes. This is the kind of thing real refactoring allows; packing types, specifically, is a change I would happily chip in and help out with.
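
Roughly the idea, sketched in D (the names and layout here are illustrative, not SDC's actual representation): squeeze a type handle into a single 32-bit word, with the kind in the top bits and an index into a side table in the rest.

    import std.stdio;

    enum TypeKind : ubyte { Builtin, Pointer, Slice, Struct, Function }

    struct PackedType {
        // 4 bits of kind + 28 bits of index into a side table.
        private uint payload;

        this(TypeKind kind, uint index) {
            assert(index < (1u << 28));
            payload = (cast(uint) kind << 28) | index;
        }

        TypeKind kind() const { return cast(TypeKind)(payload >> 28); }
        uint index() const { return payload & ((1u << 28) - 1); }
    }

    void main() {
        auto t = PackedType(TypeKind.Pointer, 42);
        writeln(t.kind, " ", t.index); // Pointer 42
    }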

Memory layout and locality are where the performance is in a compiler (that, and doing less work).
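
Same theme on the layout side (again just a sketch, not real compiler code): keep AST nodes in one flat array and refer to them by 32-bit indices instead of pointers, so node references are half the size and walks stay in cache.

    import std.stdio;

    enum NodeKind : ubyte { IntLiteral, Add }

    struct Node {
        NodeKind kind;
        uint lhs;   // index into the nodes array (unused for literals)
        uint rhs;
        long value; // payload for literals
    }

    struct Ast {
        Node[] nodes; // one flat allocation; appended nodes stay adjacent

        uint literal(long v) {
            nodes ~= Node(NodeKind.IntLiteral, 0, 0, v);
            return cast(uint)(nodes.length - 1);
        }

        uint add(uint l, uint r) {
            nodes ~= Node(NodeKind.Add, l, r, 0);
            return cast(uint)(nodes.length - 1);
        }

        long eval(uint i) const {
            final switch (nodes[i].kind) {
                case NodeKind.IntLiteral: return nodes[i].value;
                case NodeKind.Add: return eval(nodes[i].lhs) + eval(nodes[i].rhs);
            }
        }
    }

    void main() {
        Ast ast;
        auto e = ast.add(ast.literal(2), ast.literal(3));
        writeln(ast.eval(e)); // 5
    }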



'Compilers' should be incremental (and iterative), and the asymptotics are way more important than the constant factors; if the conceptual models are not adequate to effectively express very fine-grained incrementality, no amount of bitpacking is going to save you. See https://arxiv.org/pdf/2104.01270.pdf as an example: it's not perfect, but very serviceable.
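
A toy illustration of what fine-grained incrementality buys (hypothetical names; the real machinery in that line of work tracks dependencies between queries, not just raw inputs): key each query on a hash of its input, so a clean query is a table lookup and only dirty queries recompute.

    import std.conv : to;
    import std.stdio;

    struct QueryCache {
        // key = module name; value = (hash of the input it was computed from, result)
        struct Entry { size_t inputHash; string result; }
        Entry[string] entries;

        string typeCheck(string moduleName, string source) {
            immutable h = hashOf(source);
            if (auto e = moduleName in entries) {
                if (e.inputHash == h)
                    return e.result; // clean: O(1) lookup, no recomputation
            }
            // dirty: pay the cost once, then cache it
            auto result = "checked " ~ moduleName
                          ~ " (" ~ source.length.to!string ~ " bytes)";
            entries[moduleName] = Entry(h, result);
            return result;
        }
    }

    void main() {
        QueryCache c;
        c.typeCheck("app", "void main() {}");
        c.typeCheck("app", "void main() {}"); // second call is a cache hit
        writeln(c.entries.length);            // 1
    }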

Being able to serialise is valuable and enables some very interesting things, but you can still do a lot with just a persistent process working on in-memory structures.


Thanks for the link.

Asymptotics do always win, but consider the price of the win: bitpacking is cheap (and fun). It's not just an academic exercise; this is where you get to use your hard-won intuition.


Not saying it's not worth it; it's just strange to put them in the same sentence when one is what makes the difference between usable and completely unusable. I lived with 20-60s dirty build times for 1-line changes at Symmetry, solely because of redundant compile-time recomputation, and I wouldn't wish that on anybody. 10x would have been on the upper edge of tolerable, but you certainly won't get 10x for 'cheap', and even then latency should rightfully be measured in milliseconds, not seconds. (~10x, incidentally, is quoted as an upper bound for newCTFE here https://forum.dlang.org/post/qxiggjwhvadbpdfkidvu@forum.dlan..., had it ever materialised.)


newCTFE was actually hitting 50x on some things I benchmarked — locality again, to a good approximation.

And my initial post did mention doing too much work in the first place.



