The answer is that for a huge variety of software, performance is not important, or perhaps is only important for a subset of the application.
My personal experience is that the dynamic languages you've laid out generally have frameworks that are extremely conducive to rapid prototyping (Django is my favorite). I've seen and done the dance many times -- start with a Django/Rails/Laravel app, get a free admin and build up some CRUD pages in no time flat, and then once you've got enough traffic to care, move parts of the application to more performant platforms (Go/JVM usually) as necessary.
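For what it's worth, the "free admin" part really is just a couple of lines. A rough sketch, assuming a hypothetical Article model:

    # admin.py -- registering a (hypothetical) model gives you list,
    # search, and create/edit/delete pages with no extra code
    from django.contrib import admin
    from .models import Article

    @admin.register(Article)
    class ArticleAdmin(admin.ModelAdmin):
        list_display = ("title", "author", "published_at")
        search_fields = ("title",)

The fields here (title, author, published_at) are made up; swap in whatever your model actually has.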
Yeah, plus even if performance is important, the app layer isn't necessarily the best place to optimize. It doesn't really matter how fast you sprint between database calls if the database and its IO dominate your site's performance profile, which they often do...
The vast majority of slow websites I've seen written with RoR were slow because the DB layer wasn't optimised: N+1 query problems, pulling way more data than needed and then processing it in Ruby, missing indices, etc.
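The N+1 problem in particular is easy to hit and easy to fix once you spot it. A Django-flavoured sketch of the same class of bug, with made-up Post/Author models:

    # N+1: one query for the posts, then one extra query per post
    posts = Post.objects.all()
    for post in posts:
        print(post.author.name)   # each access hits the DB again

    # Fix: JOIN the author in up front -- constant number of queries
    posts = Post.objects.select_related("author")
    for post in posts:
        print(post.author.name)   # author is already loaded

    # And don't pull whole rows just to use one column
    titles = Post.objects.values_list("title", flat=True)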
If that's really the case, why do people spawn multiple instances of their app?
A Python application can be anywhere from 10x to 50x slower than a native application. It also probably consumes at least 5x more memory.
Writing the same app in a compiled language is not even an optimization. It's just baseline work to ensure the code is not super slow.
Like, if you know you will sort a list of 10 items, choosing quicksort from the stdlib instead of bubble sort is not even an optimization. It's just common sense.
This is not true. Java and Golang can use asynchronous IO and maintain thousands of concurrent connections. It's just another case where slow languages are... Slow
That sort of question is totally missing the point of why people use these languages, yeah? Languages in the web world don't tend to be chosen based on memory requirements (or speed as this suggests). Are there cases where you want to think about that? Sure.
People have plenty of reasons they'd want to use Python over Go, and vice versa.
The ecosystem is a much bigger deal. There's an officially supported Python library for every SaaS product on the market, and many libraries that are best-in-class in areas like data science. It takes minutes to write to PDFs, make graphical charts, edit images, and handle a million other nuanced, minor parts of apps that you want but don't want to spend a ton of time writing.
Java is the only static language that offers roughly equivalent levels of support ecosystem-wide.
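To make the Python side of that concrete: a chart written straight to a PDF is a few lines with matplotlib, and basic image edits are similar with Pillow (filenames and data made up):

    # Chart -> PDF with matplotlib
    import matplotlib.pyplot as plt

    plt.plot([1, 2, 3, 4], [10, 20, 15, 30])
    plt.title("Signups per week")
    plt.savefig("report.pdf")

    # Resize an image with Pillow
    from PIL import Image

    img = Image.open("logo.png")
    img.thumbnail((200, 200))   # resizes in place, keeps aspect ratio
    img.save("logo_small.png")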
Forking 10 processes does not use 10x the memory of a single process starting 10 threads; it's actually almost identical, since both are implemented by the kernel using clone() and forked pages are shared copy-on-write. Many older tools written in "fast" languages, like PostgreSQL and Apache, also use forking.
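The pre-fork pattern is simple to sketch in Python: load the app once, fork workers, and the kernel keeps sharing the parent's pages until someone writes to them (toy example; load_application and serve_requests are placeholders, not a real server):

    import os

    app = load_application()   # placeholder: big, read-mostly in-memory state

    children = []
    for _ in range(10):
        pid = os.fork()
        if pid == 0:
            serve_requests(app)   # placeholder worker loop; pages stay shared
            os._exit(0)           # until the child actually writes to them
        children.append(pid)

    for pid in children:
        os.waitpid(pid, 0)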
Not for almost a decade. Ruby web servers and job processing frameworks have used forking out of the box since the release of Phusion Passenger 2 in 2008 and Resque in 2009.
This just isn't true on any decently designed system I've seen. Practically any database can manage 5k complex-ish queries per second. For the common simple queries closer to 50k.
Good luck getting more than 100 calls per second out of the slow languages.
If you start getting into decently designed systems territory, you're still going to have trouble beating some of the stuff that comes out of the Python/Lisp/Node communities. For instance: https://magic.io/blog/asyncpg-1m-rows-from-postgres-to-pytho...
Anyway, if you're locally looping for hundreds/thousands of queries to the database, instead of writing one query or calling a stored procedure, you're probably doing things wrong.
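i.e. the difference between these two, sketched with a psycopg2-style cursor and a made-up users table:

    # One round trip per id -- network latency dominates, whatever the language
    users = []
    for user_id in user_ids:
        cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
        users.append(cur.fetchone())

    # One round trip for the whole batch
    cur.execute("SELECT * FROM users WHERE id = ANY(%s)", (list(user_ids),))
    users = cur.fetchall()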
Ah, I see. I still think if you're measuring that way, there's no reason languages like Python et al. can't accomplish something similar too. And if they can't, well, there's always adding more machines. Horizontal scaling tends to happen no matter what you're using, so is the argument just that with less productive languages (probably a matter of taste for many at this point) you'll have to scale out later? That's a tradeoff to consider, but there are other tradeoffs too.
https://www.techempower.com/benchmarks/#section=data-r14&hw=... has some interesting benchmarks, apparently we should be using Dart or C++. Go does slightly better than JS but not a lot. Some newer Python frameworks aren't on there yet. None of them reach near 50k, but I don't know the details of the benchmark and they aren't all using the same DB. Certainly you can get crazy numbers depending on what you're testing. e.g. https://github.com/squeaky-pl/japronto gets one million requests per second through async and pipelining but those requests aren't doing much.
True, horizontal scaling will always save you no matter how slow the front end is. Cost becomes significant at a certain level too, though. For example, Google estimates each search query goes to over a thousand machines. If you need 100x those thousand machines to serve a query because the back end is PHP, it adds up.
And you can make Python or even PHP fast if you try hard enough.
My argument is that the engineering overhead for Go and the new breed of Java frameworks is small enough that it makes no sense to use anything else if you're planning on scaling for real.
If you start with something else, the cost of making a slow language fast, plus the multiples of extra machines you need, is far more than just using the faster language to start with.
For the benchmark you posted, take a good look at the "realistic" vs "stripped" implementations and whether the test used an ORM. You'll quickly see that the realistic test applications with any kind of ORM are exclusively C#, Java, Go, and C++.
And then you end up with slow applications or websites. Fast response times under high load can pay off and easily recoup longer development times. And it's much harder to change the system once you're successful.
Cost difference is another topic. While servers can be cheaper than developers, saving 90% of server cost can definitely clear some budget for enhancements.