Not necessarily ... you would have to issue an HTTP request per uplink chunk, but HTTP can use connection pooling, so each request does not necessarily translate to a new TCP connection [0]
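For illustration, here's a rough TypeScript sketch of "one request per chunk over a pooled connection" using Node's keep-alive agent; the host, path and chunk framing are made up, not part of any real protocol:

```typescript
import https from "node:https";

// Minimal sketch (assumed Node runtime, hypothetical host/path): one HTTPS
// request per uplink chunk, but all requests share a keep-alive agent, so the
// runtime reuses a small pool of TCP connections instead of opening a new one
// per chunk.
const agent = new https.Agent({ keepAlive: true, maxSockets: 2 });

function sendChunk(chunk: Buffer): Promise<void> {
  return new Promise((resolve, reject) => {
    const req = https.request(
      { host: "tunnel.example.com", path: "/uplink", method: "POST", agent },
      (res) => {
        res.resume();                 // drain the body so the socket can be reused
        res.on("end", () => resolve());
      }
    );
    req.on("error", reject);
    req.end(chunk);                   // one chunk per request body
  });
}
```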
Agreed, it's not quite as straightforward as the parent poster suggests. I can see issues with this approach for realtime/streaming applications, but for applications relying on a similar request/response flow of traffic it would do the job.
Yes, and for various reasons those chunks can be reordered. And a misbehaving proxy might also try to cache them. And to send data back from the server you need long-hanging GETs (long polling), which can also be subject to timeouts and weird chunking by misbehaving proxies.
These problems are all solvable, but you need to treat HTTP requests like datagrams and (basically) reimplement TCP on top of HTTP.
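Roughly like this - a minimal TypeScript sketch of the datagram idea, assuming made-up /uplink and /downlink endpoints and an X-Seq header (not any real protocol):

```typescript
// Every uplink chunk carries a sequence number so the server can reorder and
// de-duplicate it; the downlink is a long-polling GET loop that reconnects on
// timeouts. Endpoints and the x-seq header are hypothetical.
let uplinkSeq = 0;

async function sendDatagram(sessionId: string, payload: Uint8Array): Promise<void> {
  const seq = uplinkSeq++;
  // Retry until the server acknowledges this seq; duplicates are harmless
  // because the server keeps only the first copy it sees for each seq.
  for (;;) {
    const res = await fetch(
      `https://example.com/uplink?session=${sessionId}&seq=${seq}`,
      { method: "POST", body: payload }
    );
    if (res.ok) return;
  }
}

async function receiveLoop(
  sessionId: string,
  onMessage: (data: Uint8Array) => void
): Promise<never> {
  let ack = -1; // highest downlink seq seen so far
  for (;;) {
    try {
      // Long-hanging GET: the server holds this open until it has data newer
      // than `ack`, or times out and returns 204 so we poll again.
      const res = await fetch(
        `https://example.com/downlink?session=${sessionId}&ack=${ack}`
      );
      if (res.status === 204) continue;
      ack = Number(res.headers.get("x-seq"));   // hypothetical seq header
      onMessage(new Uint8Array(await res.arrayBuffer()));
    } catch {
      // Proxy or network dropped the hanging GET; just reconnect.
    }
  }
}
```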
We've done this several times now. I made one[1] myself a few years ago, based on Google's BrowserChannel implementation (which was first written for Gchat inside Gmail and supported browsers down to IE 5.5). But the best is probably SockJS - https://github.com/sockjs . IIRC it's written by some (ex?) VMware guys, and it's great.
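For reference, the SockJS client API looks roughly like this (TypeScript sketch assuming the sockjs-client npm package and a server mounted at a hypothetical /echo prefix):

```typescript
import SockJS from "sockjs-client";

// The object behaves much like a WebSocket, but SockJS silently falls back to
// streaming/polling transports when a proxy breaks the better ones.
const sock = new SockJS("https://example.com/echo");

sock.onopen = () => sock.send("hello");
sock.onmessage = (e) => console.log("received:", e.data);
sock.onclose = () => console.log("connection closed");
```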
But all this stuff is pretty outdated now. Misbehaving corporate proxies are (thankfully) getting much rarer - especially if you tunnel your traffic over HTTPS.
These days you should just use WebSockets directly.
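i.e. just the plain browser API (hypothetical endpoint):

```typescript
// Plain browser WebSocket for comparison: over wss:// it rides through most
// proxies as ordinary TLS traffic.
const ws = new WebSocket("wss://example.com/socket");

ws.onopen = () => ws.send("hello");
ws.onmessage = (e) => console.log("received:", e.data);
ws.onclose = (e) => console.log("closed with code", e.code);
```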
[0] https://en.wikipedia.org/wiki/HTTP_persistent_connection