
The answer is that for a huge variety of software, performance is not important, or perhaps is only important for a subset of the application.

My personal experience is that the dynamic languages you've laid out generally have frameworks that are extremely conducive to rapid prototyping (Django is my favorite). I've seen and done the dance many times -- start with a Django/Rails/Laravel app, get a free admin and build up some CRUD pages in no time flat, and then once you've got enough traffic to care, move parts of the application to more performant platforms (Go/JVM usually) as necessary.
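For anyone who hasn't used it, the "free admin" really is just a couple of lines. A minimal sketch, with a made-up model name:

    # models.py -- hypothetical app
    from django.db import models

    class Article(models.Model):
        title = models.CharField(max_length=200)
        body = models.TextField()
        created = models.DateTimeField(auto_now_add=True)

    # admin.py -- this one registration gives you full CRUD in Django's admin UI
    from django.contrib import admin
    from .models import Article

    admin.site.register(Article)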




Yeah, plus even if performance is important, the app layer isn't necessarily the best place to optimize. It doesn't really matter how fast you sprint between database calls if the database and its IO dominate your site's performance profile, which they often do...


Everyone seems to say this and also to write really slow websites.


Are these sites slow because the software on the server side is slow, or because they're JavaScript-bloated garbage?

That's a serious question: I find it's often hard to tell what the bottleneck might be in these applications.


The vast majority of slow websites I've seen written with RoR were slow because the DB layer wasn't optimised: N+1 query problems, pulling way more data than needed and then processing it in Ruby, missing indices, etc.
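The comment is about Rails, but the same N+1 shape is easy to reproduce (and fix) in Django's ORM, to pick the Python equivalent. Model names are hypothetical:

    # N+1: one query for the posts, then one extra query per post for its author
    for post in Post.objects.all():
        print(post.author.name)   # each access fires a separate SELECT

    # Fixed: select_related does a single JOIN and hydrates authors up front
    for post in Post.objects.select_related("author"):
        print(post.author.name)   # no extra queries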


Either that or by slow views. URL generation is often a noticeable culprit.


Most websites are written to minimize developer time not processing time.


They say this and write slow websites because they don't care. They only care about their code being "beautiful" in some weird sense.


If that's really the case, why do people spawn multiple instances of their app?

A python application can be anywhere from 10x to 50x slower than a native application. It also probably consumes at least 5x more memory.

Writing the same app in a compiled language is not even an optimization. It's just baseline work to ensure the code is not super slow.

Like, if you know you will sort a list of 10 items, choosing quicksort from the stdlib instead of bubble sort is not even an optimization. It's just common sense.
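To make the analogy concrete (Python's built-in sort is actually Timsort rather than quicksort, but the point stands):

    import random

    data = [random.randint(0, 100) for _ in range(10)]

    # Baseline: the stdlib sort -- O(n log n) and battle-tested
    print(sorted(data))

    # The avoidable alternative: a hand-rolled O(n^2) bubble sort
    def bubble_sort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    print(bubble_sort(data))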


> why do people spawn multiple instances of their app

For concurrency (number of requests handled at once) rather than speed (end-to-end time of a single request).
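A toy illustration of the distinction (numbers invented): throughput scales with the number of workers, but each individual request still takes its ~100 ms.

    import time
    from concurrent.futures import ProcessPoolExecutor

    def handle_request(i):
        time.sleep(0.1)   # stand-in for one request's fixed ~100 ms of work
        return i

    if __name__ == "__main__":
        for workers in (1, 4):
            start = time.perf_counter()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                list(pool.map(handle_request, range(10)))
            # more workers -> more requests per second; per-request latency unchanged
            print(f"{workers} worker(s): 10 requests in {time.perf_counter() - start:.2f}s")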


This is not true. Java and Golang can use asynchronous IO and maintain thousands of concurrent connections. It's just another case where slow languages are... Slow


If that was the only reason (which it is not), it would still be a very good reason to stop writing code in these languages.

Why waste 10x more memory?


> Why waste 10x more memory?

That sort of question is totally missing the point of why people use these languages, yeah? Languages in the web world don't tend to be chosen based on memory requirements (or speed as this suggests). Are there cases where you want to think about that? Sure.

People have plenty of reasons they'd want to use Python over Go, and vice versa.


The waste in memory is just an additional negative point.

"and vice versa" < Sorry, but no. There's no equivalency here.

The only reason I would ever use python is for small scripts that I only run on my machine and don't need to deploy anywhere.

Maybe 10 years ago Python was an attractive language because Java sucked, C# only ran on Windows, and there weren't many other good choices.

Now there are many expressive languages that are also statically typed and fast. D, Swift, Kotlin, etc.


The ecosystem is a much bigger deal. There's an officially supported Python library for every SaaS product on the market, and many libraries that are best-in-class in areas like data science. It takes minutes to write PDFs, make graphical charts, edit images, and handle a million other nuanced, minor parts of apps that you want but don't want to spend a ton of time writing.

Java is the only static language with a roughly equivalent level of ecosystem-wide support.


Forking 10 processes does not use 10x the memory of a single process starting 10 threads. It's actually almost identical. Both are implemented by the kernel using clone(). Many older tools written in "fast" languages, like PostgreSQL and Apache, also use forking.
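A rough way to see that on a Unix box, with the caveat that CPython's reference counting writes to object headers, so forked children share pages less perfectly than a forking C program would:

    import os, time

    # Allocated before forking: children inherit these pages copy-on-write
    big = list(range(5_000_000))

    children = []
    for _ in range(4):
        pid = os.fork()
        if pid == 0:                   # child process
            total = sum(big[:1000])    # reading mostly doesn't copy pages
            time.sleep(1)              # pause so memory can be inspected, e.g. with ps
            os._exit(0)
        children.append(pid)

    for pid in children:
        os.waitpid(pid, 0)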


We're not talking about forking here. Python/Ruby apps are actually spawned as several separate processes.


Not for almost a decade. Ruby web servers and job-processing frameworks have used forking out of the box since the release of Phusion Passenger 2 in 2008 and Resque in 2009.


This just isn't true on any decently designed system I've seen. Practically any database can manage 5k complex-ish queries per second; for common simple queries, closer to 50k.

Good luck getting more than 100 calls per second out of the slow languages.


If you start getting into decently designed systems territory, you're still going to have trouble beating some of the stuff that comes out of the Python/Lisp/Node communities. For instance: https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-pytho...

Anyway, if you're looping locally to issue hundreds or thousands of queries to the database, instead of writing one query or calling a stored procedure, you're probably doing things wrong.
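A sketch of the difference using asyncpg from the post linked above; the DSN, table, and column names are hypothetical:

    import asyncio
    import asyncpg

    async def main():
        conn = await asyncpg.connect("postgresql://localhost/mydb")
        ids = list(range(1, 1001))

        # Wrong: one round trip per id -- 1000 separate queries
        # for i in ids:
        #     await conn.fetchrow("SELECT * FROM users WHERE id = $1", i)

        # Right: one round trip for the whole batch
        rows = await conn.fetch("SELECT * FROM users WHERE id = ANY($1::int[])", ids)

        await conn.close()

    asyncio.run(main())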


I'm talking about API calls where you make a few database queries, not local or batched.

Java/Go with pretty much any database can manage about 50,000 individual queries+REST calls a second.


Ah, I see. I still think that if you're measuring that way, there's no reason languages like Python et al. can't achieve similar numbers. And if they can't, well, there's always adding more machines. Horizontal scaling tends to happen no matter what you're using, so is the argument just that with less productive languages (probably a matter of taste for many at this point) you'll have to scale out later? That's a tradeoff to consider, but there are other tradeoffs too.

https://www.techempower.com/benchmarks/#section=data-r14&hw=... has some interesting benchmarks; apparently we should be using Dart or C++. Go does slightly better than JS, but not by a lot. Some newer Python frameworks aren't on there yet. None of them gets anywhere near 50k, but I don't know the details of the benchmark, and they aren't all using the same DB. You can certainly get crazy numbers depending on what you're testing: e.g. https://github.com/squeaky-pl/japronto gets one million requests per second through async and pipelining, but those requests aren't doing much.


True, horizontal scaling will always save you no matter how slow the front end is, but cost becomes significant at a certain scale. For example, Google estimates each search query touches over a thousand machines. If you need 100x that many machines because the back end is PHP, it adds up.

And you can make Python or even PHP fast if you try hard enough.

My argument is that the engineering overhead of Go and the new breed of Java frameworks is small enough that it makes no sense to use anything else if you're planning on scaling for real.

If you start with something else, the cost of making a slow language fast, plus the multiples of extra machines you need, is far higher than the cost of just using the faster language from the start.

For the benchmark you posted, take a good look at the "realistic" vs "stripped" implementations and whether the test used an ORM. You'll quickly see that the realistic test applications with any kind of ORM are exclusively C#, Java, Go, and C++.


And then you end up with slow applications or websites. Fast response times under high load can pay off and easily recoup longer development times. And it's much harder to change the system once you're successful.

Cost difference is another topic. While servers can be cheaper than developers, saving 90% of server cost can definitely clear some budget for enhancements.



