That sort of question is missing the point of why people use these languages. Languages in the web world don't tend to be chosen based on memory requirements (or speed, as this suggests). Are there cases where you want to think about that? Sure.
People have plenty of reasons they'd want to use Python over Go, and vice versa.
The ecosystem is a much bigger deal. There's an officially supported Python library for every SaaS product on the market, and many libraries that are best-in-class in areas like data science. It takes minutes to write PDFs, make charts, edit images, and handle a million other niche, minor parts of apps that you want but don't want to spend a ton of time writing.
Java is the only statically typed language with roughly equivalent levels of ecosystem-wide support.
Forking 10 processes does not use 10x the memory of a single process starting 10 threads. It's actually almost identical: fork uses copy-on-write, so parent and children share physical pages until one of them writes to a page, and on Linux both fork() and thread creation are implemented by the kernel using clone(). Many older tools written in "fast" languages, like PostgreSQL and Apache, also use forking.
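A minimal sketch of the copy-on-write point, using only the stdlib: a large structure is allocated once in the parent, then forked children read it without triggering any copy. (The page sharing itself happens in the kernel and isn't visible from Python; this just shows that the children see the parent's data without it being pickled or duplicated by user code, which is the mechanism preforking servers rely on.)

```python
import os

# A large read-only structure allocated before forking.
# With copy-on-write, forked children share these physical
# pages with the parent until someone writes to them.
data = list(range(1_000_000))

children = []
for _ in range(4):
    pid = os.fork()
    if pid == 0:
        # Child: reads the parent's data in place, no copy made.
        total = sum(data[:10])
        os._exit(0 if total == 45 else 1)
    children.append(pid)

# Parent: reap children; each one saw the shared data.
statuses = [os.waitpid(pid, 0)[1] for pid in children]
print(all(os.WEXITSTATUS(s) == 0 for s in statuses))
```

This is Unix-only (os.fork doesn't exist on Windows), which is also why the preforking model shows up mostly in Unix-centric servers.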
Not for almost a decade. Ruby web servers and job-processing frameworks have used forking out of the box since the releases of Phusion Passenger 2 in 2008 and Resque in 2009.
Why waste 10x more memory?