They didn't use good, standard practices for handling the exception. It's an example of them not taking their own advice, advice I'm sure you could find on the website, if it weren't down.
Wouldn't a decent load balancer/proxy (e.g., HAProxy) in front of the application servers be a good idea, so traffic could be redirected to a graceful "Oops" page/server when something like this happens?
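Something along these lines, purely for illustration (server names, addresses, and the health-check path are made up): the "Oops" box is marked as a backup, so it only gets traffic once every real application server has failed its checks.

    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s

    frontend www
        bind *:80
        default_backend app_servers

    backend app_servers
        # mark app servers down when the health check fails
        option httpchk GET /health
        server app1 10.0.0.11:8080 check
        server app2 10.0.0.12:8080 check
        # 'backup' receives traffic only when all other servers are down,
        # so it can serve a static "Oops" page
        server oops 10.0.0.99:8080 backup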
The error message could represent defective hardware. If we did our job right, that's about all it would ever indicate. There are few sane actions one could take in an exception handler to remediate the problem. I will agree that a better overall system design could detect this problem and rotate out the defective nodes, though.
But in this case, for all we know, it could be a software defect from which there is no recovery. IMO it's bad to overdesign recovery mechanisms that just end up masking design errors. Clever staged rollouts of changes are a good way to mitigate the impact of a new regression.
Hardware failure is one thing, but in my experience it's usually just devs not taking the time to handle errors and exceptions properly. If something fails, restart the process and report the error quietly in a log, not in front of the customer.
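Roughly this pattern, sketched in Python (the handler and the "do_work" logic are hypothetical, not anyone's actual code): the full traceback goes to a log for operators, while the customer only ever sees a generic message.

    import logging
    import traceback

    logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

    def handle_request(request):
        try:
            return do_work(request)  # hypothetical application logic
        except Exception:
            # full details go to the log for operators...
            logging.error("Unhandled error:\n%s", traceback.format_exc())
            # ...while the user gets a generic, non-technical response
            return "Sorry, something went wrong. Please try again later."

    def do_work(request):
        raise RuntimeError("simulated failure")  # stand-in for real work

    print(handle_request(None))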
The error is coming from IIS. There are several additional layers that could go down, including CGI, the Application Pool(s), the database(s), parts of the LAN interconnecting the different servers/services, or anything else the ASP.NET site relies upon (e.g. handle depletion, disk space, etc.).
In IIS this is a "last resort" error, shown only when even the custom error handler cannot respond.
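For example, a minimal custom-error setup in web.config looks roughly like this (the redirect target is a placeholder); the raw IIS page only shows up when this layer itself cannot run:

    <configuration>
      <system.web>
        <!-- remote users get the friendly page; requests from the server itself see full details -->
        <customErrors mode="RemoteOnly" defaultRedirect="~/Error.html" />
      </system.web>
    </configuration>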