More importantly, GC'ed languages tend to use at least 2x the memory of un-GC'ed languages and have to deal with the consequences of GC-induced pauses and generally inferior native code interop. Whether that matters to you or not depends on your application. No one is going to use a GC'ed language in the Linux kernel, but practically 100% of backend applications are written in GC'ed languages because the productivity benefits of automatic memory management are massive.
I'm not really sure that 2x figure is accurate. I've seen charts on both sides of this, and a lot depends on your programming language and what it can optimize: with linear/affine types, I'm fairly sure Haskell could, in theory, eliminate GC deterministically from the critical sections of your codebase without forcing you to adopt manual memory management everywhere.
But there's also the simple fact that people writing real-time and near-real-time systems do, in fact, choose GC'ed languages and make them work. Video games are one example, with Minecraft and Unity being the most prominent. HFT systems are another: Jane Street heavily uses OCaml, and other firms use Java and the like with specialized GCs.
This is not even to mention the microbenchmarks that seem to indicate that Common Lisp and Java can match or exceed Rust for tasks like implementing lock-free hash maps and various other things: https://programming-language-benchmarks.vercel.app/problem/s...
I am aware that you can hit really good latency targets with GC'ed languages, as in the video game and finance industries. Whenever I investigate examples, though, I find the devs have to go through a ton of effort to avoid memory allocations, and then I have to ask whether using the GC'ed language was even worth it in the first place.
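To make that concrete, here's a rough sketch of what "avoiding allocations" tends to look like in Java (hypothetical names and layout, not taken from any of the systems mentioned above): pre-allocate everything once, mutate in place, and keep boxing and per-message garbage out of the hot path.

    // Hypothetical hot path that decodes price updates without allocating per message.
    // Everything is pre-allocated once and reused; note the absence of `new` in onMessage.
    final class PriceUpdateHandler {
        private final byte[] scratch = new byte[256];       // reusable scratch buffer
        private final PriceEvent event = new PriceEvent();  // mutable holder, reused every call

        void onMessage(byte[] payload, int offset, int length) {
            System.arraycopy(payload, offset, scratch, 0, length);
            event.instrumentId = readLong(scratch, 0);
            event.price = readLong(scratch, 8);   // fixed-point long instead of BigDecimal
            event.quantity = readInt(scratch, 16);
            process(event);
        }

        private static long readLong(byte[] b, int i) {
            long v = 0;
            for (int k = 0; k < 8; k++) v = (v << 8) | (b[i + k] & 0xFF);
            return v;
        }

        private static int readInt(byte[] b, int i) {
            int v = 0;
            for (int k = 0; k < 4; k++) v = (v << 8) | (b[i + k] & 0xFF);
            return v;
        }

        private void process(PriceEvent e) { /* hand off to downstream logic */ }

        // Plain mutable fields instead of an immutable object allocated per message.
        static final class PriceEvent {
            long instrumentId;
            long price;      // price scaled by 10^4
            int quantity;
        }
    }

None of it is hard in isolation, but it's a very different style from idiomatic Java, which is exactly why I wonder whether the GC'ed language is still pulling its weight there.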
I'm actually fascinated with the idea of going off-heap in the hotspots of GC'ed languages to get better performance. Netty, for instance, relies on off-heap allocations to achieve better networking performance. But, once you do so, you start incurring the disadvantages of languages like C/C++, and it can get complicated mixing the two styles of code.
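For a rough idea of what that looks like in Java (this is just plain java.nio, not Netty's actual ByteBuf/pooling machinery), "off-heap" usually means direct ByteBuffers: memory the GC never scans or moves, but whose lifetime is now entirely your problem.

    import java.nio.ByteBuffer;

    public class OffHeapScratch {
        // A direct buffer lives outside the Java heap; the GC doesn't scan or move it,
        // which is what makes it attractive for networking hot paths.
        private static final ByteBuffer BUF = ByteBuffer.allocateDirect(1 << 20); // 1 MiB

        public static void main(String[] args) {
            BUF.clear();
            BUF.putLong(42L).putInt(7);
            BUF.flip();
            System.out.println(BUF.getLong() + " " + BUF.getInt()); // prints "42 7"
            // The catch: the native memory is only released when the buffer object itself
            // is eventually collected, so real systems pool and reuse these buffers, C-style,
            // and you're back to reasoning about ownership and lifetimes by hand.
        }
    }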
"Whenever I investigate examples, though, I find the devs have to go through a ton of effort to avoid memory allocations"
Yep, and also the median dev in a GC'ed language is simply incapable of writing super efficient code, because they rarely have to. You would have to bring in the best of the best from those communities, or put your existing devs through a pretty significant education process that is similar in difficulty to just learning and using Rust.
The resulting code will be very different from what typical code looks like in those languages, so the supposed homogeneity benefit of "just write fast C#/Java when it's needed" probably doesn't quite hold. You'd basically have to keep that project staffed with these kinds of people and ensure they have very good production observability so that regressions don't creep in.
Yes, and I think one important aspect of this is the CI/CD changes needed to support these kinds of optimizations. If your performance targets are tight enough that you're making significant non-standard optimizations in your GC'ed language, you're probably going to want automated performance regression testing in your deployment pipeline to make sure you don't ship something that falls over under load. In my experience, building and maintaining those pipeline components is not easy.
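Agreed. As a sketch of the kind of benchmark you'd wire into that pipeline (hypothetical code; the actual pass/fail gate would be a CI step comparing results against a stored baseline, not shown here), something JMH-shaped is the usual starting point on the JVM:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    // Hypothetical JMH benchmark guarding a hot code path. CI runs it on every merge;
    // a separate step diffs the results against a baseline and fails the build if the
    // sample time regresses past an agreed threshold.
    @BenchmarkMode(Mode.SampleTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 5, time = 1)
    @Measurement(iterations = 10, time = 1)
    @Fork(1)
    @State(Scope.Benchmark)
    public class HotPathBenchmark {
        private byte[] message;

        @Setup
        public void setup() {
            message = new byte[256]; // representative input for the path under test
        }

        @Benchmark
        public long decode() {
            // In real use, call the production decode path here. Returning a value
            // keeps the JIT from dead-code-eliminating the work being measured.
            long acc = 0;
            for (byte b : message) acc += b;
            return acc;
        }
    }

The benchmark itself is the easy part; keeping the CI hardware and the baseline stable enough that the gate doesn't cry wolf is where most of the ongoing maintenance goes, at least in my experience.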
Look at 2.cl, though: the Lisp solution is faster than everything except one C++ solution. (And, aside from the SIMD intrinsics, the Lisp solution is fairly idiomatic.)
I mostly agree with what you're saying, but I'll also add that GC pauses are mostly a problem of yesteryear unless you're either managing truly enormous amounts of memory or have hard real-time requirements (and even then it's debatable). Modern GCs, as seen in Go, Java 11+ (ZGC), and .NET 4.5+, can deliver sub-millisecond pauses on terabyte-sized heaps (I believe the JS GC does as well, but I'm less sure).
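If anyone wants a rough way to poke at this on their own machine, here's a quick-and-dirty sketch (not a rigorous benchmark, and the management beans report cumulative collection time rather than individual pause lengths, so it's only a proxy): run it with and without -XX:+UseZGC and compare what the collectors report.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcTimeDemo {
        public static void main(String[] args) {
            // Churn out short-lived garbage to provoke plenty of collections.
            long sink = 0;
            for (int i = 0; i < 50_000_000; i++) {
                sink += new byte[64].length;
            }
            // Report per-collector counts and cumulative reported GC time.
            long totalMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                totalMs += gc.getCollectionTime();
            }
            System.out.println("Total reported GC time: " + totalMs + " ms (sink=" + sink + ")");
        }
    }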