The oft-cited reason is that each call fires up a new process, so it's expensive in terms of resources (though resources are cheap these days). Additionally, the CGI program has to faff about to retrieve data and parameters from the request (easy to get wrong), plus you have to watch out for security issues (hard to get right). And, of course, since the process exits at the end of the request, any kind of session tracking needs to be implemented separately.
Having said that, for in-house use, embedded and/or trusted environments they allow super quick hacks, e.g. a complete CGI 'program' to print the date (put it in cgi-bin and make it executable):-
#!/bin/sh
# CGI response: header(s), then a blank line, then the body
printf 'Content-Type: text/plain\r\n\r\n'
echo "Your IP is $REMOTE_ADDR"
date
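To illustrate the 'faff' mentioned above: request data arrives via environment variables (QUERY_STRING for GET parameters; POST bodies come in on stdin). A minimal sketch, assuming a hypothetical 'name' parameter and skipping URL-decoding, so it's only a toy:-

#!/bin/sh
printf 'Content-Type: text/plain\r\n\r\n'
# split the query string on '&' and pull out the 'name=' parameter (no URL-decoding)
name=$(printf '%s' "$QUERY_STRING" | tr '&' '\n' | sed -n 's/^name=//p')
echo "Hello, ${name:-world}"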
Is there a benchmark you're aware of that takes a CGI program and pits it against modern frameworks like Node.js Express, Hono, etc.? Just to get a feel for how many requests can be handled at scale.
No, though there are many articles proving X is faster than Y along with articles proving the opposite.
The process overhead is likely to hurt CGI, though, which is why FastCGI was developed. As covered elsewhere in the thread, having a fast front end with effective comms to a well-written backend seems a reasonable sweet spot.
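If you want a rough number for your own setup rather than someone else's article, the usual approach is to point an HTTP load tool at each endpoint. A quick sketch, assuming ApacheBench (ab) is installed and the date script above is served at /cgi-bin/date.sh (both the path and the comparison URL are placeholders):-

# 1000 requests, 10 concurrent, against the CGI script
ab -n 1000 -c 10 http://localhost/cgi-bin/date.sh

# same load against whatever framework endpoint you want to compare
ab -n 1000 -c 10 http://localhost:3000/date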