Hacker News

If you are working on software that will run on customers' machines tomorrow, it makes sense to do at least a good chunk of development on very average hardware. These days, most client development is web stuff, so you don't often have the excuse of compile times.

But if you're working on software that's supposed to run well on high-spec machines and servers, or that targets machines a few years in the future, then you're better off with a higher-spec machine: one with lots of RAM and CPU cores, so you can play around with the various tradeoffs of time vs. memory vs. parallelism. And if you have a big source tree (the one I have is perhaps 10 GB in size, and takes about 14 minutes to build today), it makes lots of sense to reduce turnaround time by throwing hardware at it.

For example, I work on a compiler that is used by the build tree. I can't really be sure the compiler is "good" unless it builds the whole tree and the tree's tests pass; if I checked it in as is, the integration server would find the problem, and then I'd be in everybody's bad books. Reduce the build time by 5 minutes, iterate that over perhaps 5 or 10 builds in a day, and it starts adding up to non-trivial productivity advantages.
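A quick back-of-envelope sketch of that arithmetic, using only the numbers from the comment above (5 minutes saved per build, 5 to 10 builds a day); the function name is just for illustration:

```python
def daily_savings_minutes(minutes_saved_per_build: int, builds_per_day: int) -> int:
    """Total minutes recovered per day from a faster build."""
    return minutes_saved_per_build * builds_per_day

# 5 minutes saved per build, at 5 vs. 10 builds per day:
low = daily_savings_minutes(5, 5)    # 25 minutes/day
high = daily_savings_minutes(5, 10)  # 50 minutes/day
print(low, high)
```

Over a week that is roughly two to four hours of turnaround time recovered, before counting the cost of broken concentration while waiting.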




Maybe I'm being a closed-minded idiot, but "I write compilers for a living" isn't something a lot of programmers can claim. I guess I'm working on the assumption that most programmers are building enterprise apps. I don't know why I think that.


I'm working on backend systems for a small startup, and I am writing software that on my work laptop (MacBook Pro, Core 2 Duo at 2.4 GHz, with 4 GB of RAM) takes about 30 seconds to compile. Tests take another minute or so to run. That is 1 minute and 30 seconds too long, because my attention is now elsewhere ...



