The optimization proposed was exactly what HTTP/2 does:
> "part of the idea with HTTP2 that the server can see everything you're asking for and more intelligently prioritize"
HTTP/2 prioritisation does in fact optimize continuously: the server starts lower-priority streams while higher-priority data isn't yet available, then pauses them mid-send as higher-priority streams' data becomes available.
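A minimal sketch of that per-frame scheduling loop (the stream IDs, priorities, and tiny frame size are illustrative assumptions, not any real server's internals):

```go
package main

import (
	"container/heap"
	"fmt"
)

type stream struct {
	id       int
	priority int    // lower value = higher priority
	pending  []byte // response bytes generated so far
}

// streamHeap orders streams so the highest-priority one with data sends first.
type streamHeap []*stream

func (h streamHeap) Len() int            { return len(h) }
func (h streamHeap) Less(i, j int) bool  { return h[i].priority < h[j].priority }
func (h streamHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *streamHeap) Push(x interface{}) { *h = append(*h, x.(*stream)) }
func (h *streamHeap) Pop() interface{} {
	old := *h
	s := old[len(old)-1]
	*h = old[:len(old)-1]
	return s
}

const frameSize = 16 // small frames so re-prioritization can happen often

// sendNextFrame writes one frame from the best ready stream, then re-queues it.
// Because the heap is consulted per frame, a higher-priority stream whose data
// arrives late preempts a lower-priority stream that is already mid-send.
func sendNextFrame(h *streamHeap) bool {
	if h.Len() == 0 {
		return false
	}
	s := heap.Pop(h).(*stream)
	n := frameSize
	if n > len(s.pending) {
		n = len(s.pending)
	}
	fmt.Printf("DATA frame: stream %d, %d bytes\n", s.id, n)
	s.pending = s.pending[n:]
	if len(s.pending) > 0 {
		heap.Push(h, s) // paused mid-send, not committed to finish first
	}
	return true
}

func main() {
	h := &streamHeap{}
	heap.Init(h)
	// Low-priority response is ready first and starts sending...
	heap.Push(h, &stream{id: 3, priority: 5, pending: make([]byte, 48)})
	sendNextFrame(h)
	// ...then the high-priority response becomes available and preempts it:
	// stream 1's frames go out before stream 3 finishes.
	heap.Push(h, &stream{id: 1, priority: 0, pending: make([]byte, 32)})
	for sendNextFrame(h) {
	}
}
```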
> "if the most important response takes longer to generate than lower priority responses, the server may end up starting to send data for a lower priority response and then interrupt its stream when the higher priority response becomes available"
> "the problem with large send buffers is that it limits the nimbleness of the server to adjust the data it is sending on a connection as high priority responses become available. Once the response data has been written into the TCP send buffer it is beyond the server’s control and has been committed to be delivered in the order it is written"
You proposed replacing this HTTP/2 mechanism with HTTP/1.1 over multiple TCP connections, which does not provide an equivalent optimization.
Browsers do in fact optimize HTTP/1.1 per asset by keeping a list of requests in priority order and running a limited number of TCP connections in parallel, but when HTTP/2 is available it usually works out faster on the internet. Partly for the reasons described in the Cloudflare article, and also because handing the server as many requests as possible up front lets it start fetching or generating lower-priority assets sooner, hiding some backend latency - especially significant behind load balancers, other reverse proxies, or HDD cold storage.
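For comparison, a minimal sketch of that browser-side HTTP/1.1 behaviour; the URLs, priorities, and the classic six-connections-per-host limit are illustrative assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"sort"
	"sync"
)

type request struct {
	url      string
	priority int // lower = more important (e.g. CSS before images)
}

func main() {
	queue := []request{
		{"https://example.com/app.js", 1},
		{"https://example.com/hero.jpg", 2},
		{"https://example.com/style.css", 0},
	}
	// Dispatch in priority order, at most maxConns requests in flight.
	sort.Slice(queue, func(i, j int) bool {
		return queue[i].priority < queue[j].priority
	})

	const maxConns = 6 // the classic per-host browser connection limit
	sem := make(chan struct{}, maxConns)
	var wg sync.WaitGroup
	for _, r := range queue {
		wg.Add(1)
		sem <- struct{}{} // block until one of the maxConns slots is free
		go func(r request) {
			defer wg.Done()
			defer func() { <-sem }()
			resp, err := http.Get(r.url)
			if err != nil {
				fmt.Println(r.url, err)
				return
			}
			resp.Body.Close()
			fmt.Println(r.url, resp.Status)
		}(r)
	}
	wg.Wait()
}
```

The semaphore is what enforces the per-host cap: the server only ever sees up to maxConns requests at a time, whereas HTTP/2 multiplexes the whole list onto one connection the server can schedule as a whole.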