
I feel like people let go of perf too easily.

When using something like Go, I have apps doing normal CRUD-ish queries at 10k QPS on 32-core/64 GB machines. For most web apps, 10k QPS is far more than they will ever see, and the fact that it is all done in a single process means you could do really cool things with in-memory data structures.

Instead, every single web app is written as a distributed system, when almost none of them would need to be if they were written on a platform that didn't eat all of their resources.



I could rephrase your comment as: why would anyone use Go when they could just use assembler or C and keep it all on a single node?

People don't use Python because they want performance. People use Python for productivity, frameworks, libraries, documentation, resources, and ecosystems. Most projects don't even need 10k QPS; what they do need is an ORM, a migrations system, authentication, sessions, etc. Python has battle-tested tools and frameworks for all of this.


People have been taught to be irrationally afraid of in-process concurrency (including async). Not too long ago the standard approach for concurrency was "it's hard, don't do it".

I've been told off in code review for using Python's concurrent.futures.ThreadPoolExecutor to run some HTTP requests in parallel (making the code finish N times faster, in a context where latency mattered) "because it's hard to reason about".
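For what it's worth, the pattern being described is small; here's a minimal sketch, with a hypothetical fetch() (simulated with a sleep) standing in for the real HTTP calls:

```python
# Fan out N slow I/O-bound calls across a thread pool so total latency
# is roughly max(latencies) instead of sum(latencies).
import time
from concurrent.futures import ThreadPoolExecutor

URLS = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

def fetch(url: str) -> str:
    # Placeholder for something like requests.get(url).text;
    # the sleep simulates network I/O releasing the GIL.
    time.sleep(0.1)
    return f"body of {url}"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
    # map() preserves input order and re-raises any worker exception
    # at the point of iteration, so error handling stays in one place.
    bodies = list(pool.map(fetch, URLS))
elapsed = time.monotonic() - start
# With one worker per URL, elapsed is ~0.1s rather than ~0.3s sequential.
```

Since the calls are I/O-bound, the GIL isn't a bottleneck here, and `pool.map` keeps the control flow as easy to reason about as the sequential loop it replaces.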



