
I meant something more along the lines of "if your code is ill-optimized, you gotta throw more CPU time at it".


Well, do you think that works?

When you do that, the developers are going to assume they can get away with making the code even less efficient. Not intentionally, of course, but the signal you're sending is "it's fine if it's slow, we'll throw more compute at it," and that's the constraint they'll optimize around.

So the next time they have to choose between a flashy feature and fixing a performance problem, they'll build the flashy feature. "We can let the hardware acquisition people deal with performance."

It might even look like it works, at first. You get features out quickly and the thing runs almost acceptably.

But then months or years later, you're spending a shitton of compute on something that could run on a couple of pizza boxes. And everyone is saying "well, it's too late to change now, everything is designed around an HPC assumption."

And now you're stuck bleeding money on things you shouldn't have needed.

If I sound sore it's because I have lived through it too many times.

You should never, ever solve the immediate problem without considering what long-term reinforcing feedback loops you're setting up in the process.
