I agree, but there is a sort of beauty in programs that were written for absurdly slow hardware.
There was a thing on HN like seven years ago [1] that talked about how command line tools can be many times faster than Hadoop; the streams and pipelines are just so ridiculously optimized.
Obviously you're not going to replace all your Hadoop clusters with just Bash and netcat, and I'm sure there are many cases where Hadoop absolutely outperforms something cobbled together with a Bash script, but I still think it serves a purpose: because these tools were written for such tiny amounts of RAM and crappy CPUs, they perform cartoonishly fast on modern computers.
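For a flavor of what that streaming style looks like, here's a toy sketch of my own (not from the linked article): read stdin line by line, keep a small counter, and never hold the whole dataset in memory. The tab delimiter and field index are made-up assumptions for illustration.

    # count_fields.py - toy streaming aggregator, a stand-in for the
    # grep/awk stage of a shell pipeline; reads stdin one line at a
    # time, so memory stays roughly constant regardless of input size
    import sys
    from collections import Counter

    counts = Counter()
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")  # delimiter is an assumption
        if len(fields) > 2:
            counts[fields[2]] += 1              # field index is an assumption

    for key, n in counts.most_common():
        print(f"{n}\t{key}")

You'd drop it into a pipeline the usual way (something like `cat data/*.tsv | python3 count_fields.py`), and a single-threaded sequential scan like this is typically limited by how fast the disk can feed it, which is exactly why the old tools feel so fast on modern hardware.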
I don't like coding like it's 1995 either, and I really don't write code like that anymore; most of the stuff I write nowadays can happily assume several gigs of memory and many CPUs, but I still respect people who can squeeze every bit of juice out of a single thread and next to no memory.
Single threading always makes it run slower though.
Also, lots of 1995 assumptions lead to outrageously slow software if applied today. Python in 1995 was only marginally slower than C++; it's orders of magnitude slower today.
Yeah, that’s not true. In fact I’d say it’s almost the opposite: most things (other than I/O-bound work) will go slower if you just throw extra threads at them.
There’s overhead in creating threads, locks can introduce a lot of contention, waiting, and context switches, coordination between threads has a non-zero cost, and the list goes on.
Well-optimized multithreaded code will often be faster, but that’s harder than it sounds, and it’s certainly not the case that “single threading always makes it run slower”.
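To make that concrete, here's a small and deliberately naive sketch of my own: summing a list single-threaded versus splitting it across threads. The data size and thread count are arbitrary, and CPython's GIL makes the contrast especially stark for CPU-bound work, but the spawn/join and coordination costs it shows up exist in every language.

    # naive_threads.py - throwing threads at a CPU-bound task is not
    # automatically faster; data size and thread count are arbitrary
    import threading
    import time

    data = list(range(10_000_000))

    def single():
        return sum(data)

    def multi(n_threads=4):
        results = [0] * n_threads
        chunk = len(data) // n_threads

        def work(i):
            lo = i * chunk
            hi = len(data) if i == n_threads - 1 else (i + 1) * chunk
            results[i] = sum(data[lo:hi])

        threads = [threading.Thread(target=work, args=(i,)) for i in range(n_threads)]
        for t in threads:
            t.start()   # spawning threads isn't free
        for t in threads:
            t.join()    # neither is waiting for the slowest one
        return sum(results)

    for name, fn in [("single thread", single), ("4 threads", multi)]:
        start = time.perf_counter()
        fn()
        print(f"{name}: {time.perf_counter() - start:.3f}s")

On CPython the threaded version usually comes out slower; even in languages without a GIL you need each chunk of work to be big enough to amortize the thread creation and synchronization overhead before the parallel version wins.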