AIUI, Discord's issue would largely not arise in current Go code. In fact, if I remember correctly, Go had a near-fix for the issue in flight, in beta, by the time they finished their rewrite, but having the rewrite fully in hand by then, there was no reason to go back. (Were it half in hand, one might have to discuss the sunk cost fallacy, but when it was entirely in hand and half-deployed, going back just costs more.)
However, even though Discord's article is out of date in terms of their exact numbers, the principle still holds, just at larger scales. If one keeps scaling up, eventually one will encounter fairly fundamental, difficult problems that require unusual solutions, and no fully automated memory-management scheme will solve them.
I would observe, though, that these complaints arise at a very significant scale. It is a common error among programmers to assess their needs as if they were going to write code running on a hundred servers maxed out at near-100% CPU, when in reality their code will run comfortably on a single instance using 5% of one CPU.
I say without hesitation that if someone is looking to run dozens of maxed-out servers, Go is a bad choice, and it is a mistake to even start writing that code in Go. (There are many even worse choices; if Uber were trying to write the same service in Python or something... yeowch.) But if someone rejects Go because it can't hit that use case, when the use case couldn't possibly reach that scale unless every person on the planet became a customer five times over, they're making the exact same mistake. Go is a good solution for many very common use cases, and it's not that hard to do some Feynman estimation at the start of a project and notice whether it's getting close to Go's comfortable limits.
(Even growth isn't really an excuse. Resources are so abundant that you should take a log-based view, or an exponential-based view if you prefer. I like to have at least an order-of-magnitude buffer in my design above the largest scale I could plausibly face, and most of the time that's pretty practical nowadays. If I had a case where Go would work, but I'd only have roughly 2x of headroom before growth became a problem, I wouldn't use it; it's too easy to consume that margin through usage growth, future changes in what the system needs to do, or error in the Feynman estimation. But resources are, as I said, so abundant that by the time I'm maxing out a 32- or 64-core system with however much RAM that comes with nowadays, I'm running a lot of stuff.)
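To make that kind of Feynman estimation concrete, here's a minimal sketch. Every number in it is a made-up assumption for illustration (the user ceiling, requests per user, CPU cost per request, and peak-to-average ratio are all hypothetical), not measurements from any real service:

```go
package main

import "fmt"

func main() {
	// Hypothetical inputs: pick the largest plausible ceiling, not current load.
	const (
		peakUsers        = 1_000_000 // assumed absolute ceiling on customers
		reqPerUserPerDay = 50        // assumed traffic per user
		cpuMsPerReq      = 1.0       // assumed CPU-milliseconds per request in Go
	)
	const secondsPerDay = 86400.0

	avgRPS := float64(peakUsers*reqPerUserPerDay) / secondsPerDay
	peakRPS := avgRPS * 10 // assume a 10x peak-to-average ratio
	coresAtPeak := peakRPS * cpuMsPerReq / 1000.0

	fmt.Printf("avg %.0f req/s, peak %.0f req/s, ~%.1f cores at peak\n",
		avgRPS, peakRPS, coresAtPeak)
	// Apply the order-of-magnitude buffer on top of coresAtPeak; if 10x that
	// still fits on one 32- or 64-core box, Go is comfortably within range.
}
```

Under these assumptions it comes out to roughly 6 cores at peak, so even with a 10x buffer the service fits on one large machine; if instead the estimate landed at hundreds of cores before the buffer, that's the signal to reconsider the choice.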
I would be curious whether they've got a "rewrite it in Rust" effort going. I wouldn't be surprised to see it cut the CPU usage by another half or two-thirds, depending on how big and complicated the service in question is.
I guess you could even use that as a metric: if someone came up to you and said, "I've got a magic button that, if I push it, will cut your code's CPU usage in half. How much will you pay me to push it?" and the answer is a non-committal shrug, Go's a fine choice. I have about a dozen Go services and I'd pay you about a buck to push that button, because they're already far more efficient than I need. Uber would clearly pay quite a bit.