Even in the early 2000s, CGI was really old tech. People were moving to more modern systems that solved CGI's problems. The funny thing is that today's systems have exactly the same problems as some of those newer solutions did, yet now those problems are just ignored or accepted.
In the mid-2000s I worked on a very-large-scale website using Apache2 w/mod_perl. Our high-traffic peaks were something like 25k RPS for dynamic content (total RPS was >250k). Even at the time it was a bit old hat, but the design scaled very well. You'd have a fleet of mod_perl servers handling dynamic content, and a fleet of Apache2 servers that served static content and reverse-proxied dynamic requests back to the mod_perl fleet. In front of the static servers sat load balancers. Every tier kept connection pools open, and the load balancers sidestepped the "maximum connection limit" you'd hit with typical TCP/IP software, so there was no real connection limit, just network, memory, and CPU limits.
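Roughly, the front-end half of that setup boils down to a reverse-proxy config like this stripped-down sketch (hostnames, ports, paths, and worker parameters here are invented for illustration; the real config had far more tuning):

    # On the static-content Apache servers: serve files locally and
    # reverse-proxy dynamic paths back to the mod_perl fleet.
    DocumentRoot /var/www/static

    # mod_proxy keeps a pool of persistent backend connections per
    # worker, so requests reuse connections instead of opening a new
    # TCP connection every time.
    ProxyPass        /app http://modperl-fleet.internal:8080/app max=100 keepalive=On
    ProxyPassReverse /app http://modperl-fleet.internal:8080/app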
The big benefit of Apache2 w/mod_perl or mod_php was that you combined the pluggability and features of a scalable, full-featured web server with a resident interpreter (and its in-memory caches) that didn't have to exit and restart for every request. Yes, you had to do more work to integrate with it, but you have to do that today with any framework.
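To give a flavor of that integration work, a minimal mod_perl 2 response handler looked roughly like this (package name invented for illustration):

    package My::Handler;
    use strict;
    use warnings;
    use Apache2::RequestRec ();   # $r->content_type() etc.
    use Apache2::RequestIO ();    # $r->print()
    use Apache2::Const -compile => qw(OK);

    # Anything set up at package level runs once per Apache child,
    # not once per request, and stays resident between requests.
    my %cache;

    sub handler {
        my $r = shift;
        $r->content_type('text/plain');
        $r->print("served by a persistent interpreter\n");
        return Apache2::Const::OK;
    }
    1;

You'd wire it up in the vhost config with SetHandler perl-script and PerlResponseHandler My::Handler, and whatever you loaded or cached at package level stuck around for the life of the Apache child.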
The big downside was bugs. If you had a bug, you might have to debug both Apache and your application at the same time. There wasn't nearly as much memory to go around back then, so memory leaks were a MUCH bigger problem than they are today. We worked around them with stupid fixes like restarting interpreters after every 1000 requests or so. The high-level programmers (Perl, PHP) didn't really know C or systems programming, so they had no real way to debug Apache or the larger OS problems, which it turns out has not changed in 20 years...
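The "restart after N requests" hack was literally just a config knob; with the prefork MPM of that era it was something like:

    # Recycle each Apache child (and the Perl interpreter embedded in
    # it) after it has served this many requests, so slow memory leaks
    # can't accumulate forever. 0 would mean "never recycle".
    MaxRequestsPerChild 1000

Some shops keyed the same idea off memory instead of request count, e.g. using Apache::SizeLimit to kill children once they grew past a size threshold.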
FastCGI and later systems had the benefit that you could run the same architecture without being tied to one web server and debugging its bugs on top of your own. But they also had downsides: some implementations didn't multiplex connections, and you lost the tight integration with the web server, which made some things more difficult.
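For comparison, a FastCGI worker in Perl is just a long-lived process sitting in an accept loop, speaking the FastCGI protocol to whatever web server is in front of it. A toy sketch with the FCGI module:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use FCGI;

    # One persistent worker; the web server connects over a socket and
    # forwards requests using the FastCGI protocol.
    my $request = FCGI::Request();

    my $count = 0;
    while ($request->Accept() >= 0) {
        $count++;
        print "Content-Type: text/plain\r\n\r\n";
        print "request $count handled without restarting the interpreter\n";
    }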
Ultimately, every backend web technology is just a rehashing of CGI, in a format incompatible with everything else. There were technical reasons why things like FastCGI, WSGI, etc. came to exist, but now that we have HTTP/2 and HTTP/3 they're unnecessary. If you can multiplex HTTP connections and serve HTTP responses, you don't need anything else. I really hope future devs will stop reinventing the wheel and go back to actual standards that work outside of a single application/language/framework.
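To make that concrete, the backend can be nothing more than a small HTTP server behind the reverse proxy. Here's a deliberately toy, single-threaded sketch with HTTP::Daemon (illustration only; in practice you'd run a real HTTP server and let the proxy speak HTTP/2 or HTTP/3 to clients):

    use strict;
    use warnings;
    use HTTP::Daemon;
    use HTTP::Response;

    # A plain HTTP backend: the reverse proxy in front can speak
    # HTTP/2 or HTTP/3 to clients and ordinary HTTP to us, with no
    # FastCGI/WSGI-style gateway protocol in between.
    my $d = HTTP::Daemon->new(LocalAddr => '127.0.0.1', LocalPort => 8080)
        or die "can't listen: $!";

    while (my $conn = $d->accept) {
        while (my $req = $conn->get_request) {
            my $res = HTTP::Response->new(200);
            $res->header('Content-Type' => 'text/plain');
            $res->content("it's just HTTP all the way down\n");
            $conn->send_response($res);
        }
        $conn->close;
    }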