
This YouTube video does a great job illustrating how well HTTP/2 works in practice.

https://www.youtube.com/watch?v=QCEid2WCszM

A lesser-known downside to the HTTP/2-over-TCP solution was actually caused by one of its improvements - a single reusable (multiplexed) connection - which could end up stalled or blocked due to network issues. This behavior could go unnoticed over legacy HTTP/1.1 connections because browsers open a high number of connections (~20) to a host, so when one failed it wouldn't block everything.
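
To see that multiplexing in action, here's a rough sketch using the Python httpx client (it needs the http2 extra installed; example.com stands in for any HTTP/2-capable origin). All the concurrent requests ride as streams on one TCP+TLS connection, which is exactly why a stall on that single connection hurts everything at once:

    import asyncio
    import httpx

    async def main():
        # One client with http2=True: concurrent requests to the same origin
        # are multiplexed as streams over a single TCP+TLS connection.
        async with httpx.AsyncClient(http2=True) as client:
            responses = await asyncio.gather(
                *(client.get("https://example.com/") for _ in range(5))
            )
            for r in responses:
                print(r.http_version, r.status_code)  # "HTTP/2" if negotiated

    asyncio.run(main())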




It's an unrealistic test as pages aren't really made up of tiles of images

Generally images are the lowest priority download, so ensuring higher priority items get downloaded first is important and not all H2 implementations do it well

https://ishttp2fastyet.com


It's only unrealistic because so much tooling was built to avoid sending multiple files: JS tools for bundling every JS file into one, sprite sheets, using multiple domain names to get more concurrent connections. With HTTP2 we could dump so much of this.


To me it's unrealistic in the sense that it's an artificial test

Images are the lowest priority resource on the page and apart from the visual appearance aspects there are no dependencies on which order they're fetched in.

Most other resources on a page have greater side effects and dependencies, e.g. sync JS blocking the HTML parser, sync and deferred JS needing to be executed in order, etc.

You can saturate a last-mile connection with image downloads in a way you may not be able to with other resources, due to the effect of the browser processing those resources.


Similar arguments apply to the use of IPv6 over v4, to Linux over Windows, to RISC over x86, to anything over JavaScript, and to countless other “better” solutions to problems that don’t get fully adopted because the old stuff continues to work decently enough.


Priority doesn't help. A stalled TCP connection blocks everything on it.


Ironic, considering YouTube itself is pretty heavily made up of tiles of images.


My team ran into a surprising behavior with HTTP2. Browsers decide whether to reuse a connection not based on the original domain the connection was made to, but on the domains the returned certificate is signed for!

Our current load balancer doesn't support HTTP2 end-to-end (and we are doing gRPC), so we are load balancing TCP connections to the individual instances. And for certificates, we use SANs to reduce the number of certificates being requested.

Put those two together, and browsers will assume that the first connection they make to serviceA.example.com can also be used for serviceB.example.com. Oops!

TLDR, certificates for HTTP2 need to be unique to each endpoint that terminates a browser connection.
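
To check which hostnames a given endpoint's certificate would let a browser coalesce onto one connection, here's a quick sketch using only the Python standard library (serviceA.example.com is a stand-in name) that prints the SAN list from the certificate the server presents:

    import socket
    import ssl

    HOST = "serviceA.example.com"  # stand-in for one of your endpoints

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    # Every DNS entry here is a host the browser may consider this
    # connection authoritative for, and therefore a candidate for reuse.
    sans = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
    print(sans)  # e.g. ['serviceA.example.com', 'serviceB.example.com']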


Oh man, you basically are my hero. I just found https://www.trullala.de/firefox-http2-ipv6-pitfall/ which basically describes the problem.

Basically we have a GitLab instance at IP .8 internally / .210 externally, and a Nexus instance at .210 both internally and externally; however, we have IPv6 addresses pointed at GitLab.

In Firefox you could sometimes end up in the wrong location and I had no idea why that was happening. It just failed. And only in Firefox.

(BTW, the behavior of Firefox is just stupid: https://bugzilla.mozilla.org/show_bug.cgi?id=1190136)


SNI spec says:

If the server_name is established in the TLS session handshake, the client SHOULD NOT attempt to request a different server name at the application layer.

Yeah, looks like the browsers are allowed to do that, although it's not recommended.


The HTTP2 spec actually allows it:

Connections that are made to an origin server, either directly or through a tunnel created using the CONNECT method (Section 8.3), MAY be reused for requests with multiple different URI authority components. A connection can be reused as long as the origin server is authoritative (Section 10.1).

For https resources, connection reuse additionally depends on having a certificate that is valid for the host in the URI. The certificate presented by the server MUST satisfy any checks that the client would perform when forming a new TLS connection for the host in the URI.

There is a way to respond with an error code when a server receives a request for the wrong domain, but it seems like a bad idea to depend on it (because it means lots of failed requests that could be avoided with better certificate management):

A server that does not wish clients to reuse connections can indicate that it is not authoritative for a request by sending a 421 (Misdirected Request) status code in response to the request (see Section 9.1.2).

https://http2.github.io/http2-spec/#rfc.section.9.1.1
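
For completeness, a minimal sketch of that 421 fallback in Python/Flask (the hostnames are made up; in practice the check belongs wherever the browser connection terminates): reject any request whose Host this backend isn't actually authoritative for, so the browser retries it on a fresh connection.

    from flask import Flask, request

    app = Flask(__name__)

    # Hosts this particular backend is authoritative for (made-up names).
    AUTHORITATIVE_HOSTS = {"serviceA.example.com"}

    @app.before_request
    def reject_misdirected():
        host = request.host.split(":")[0]
        if host not in AUTHORITATIVE_HOSTS:
            # 421 tells the client this connection was the wrong one to ask on.
            return "Misdirected Request", 421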


Interesting.

I think you want something like Envoy in the middle to actually terminate TLS (you can still use TLS to talk to the backends, of course) and do the load balancing. Having one backend handle one HTTP/2 connection does not necessarily balance load, anyway. With Envoy in the middle, you can again load-balance requests as though they were independent connections; if a browser has an HTTP/2 connection open that wants 10 expensive things, Envoy can use 10 backends to handle that request, rather than sending 10 expensive requests to 1 backend.

The statistics and traces you get for free are also very worthwhile, not to mention more advanced things like automatic retries, canarying, outlier detection, circuit breaking, etc.


Definitely, I was simplifying a little. For that project we were actually using separate Envoy deployments per team/service. Envoy was also needed for grpc-web support, and it has a JSON-to-gRPC transcoder.

We could have bypassed this by having a single pool of shared Envoy instances for ingress, but at the time we wanted to avoid the complexity of multiple teams managing a single Envoy configuration. In the next year we'll hopefully switch to Istio, which will help with the multi-tenant configuration management.


Ah, thanks for sharing that story. That sounds like my environment; we started using Envoy for grpc-web... now it's in front of everything. Except, of course, it's not the main ingress controller because we had so many legacy nginx rules that we decided it was easier to use nginx as the proxy that's in front of everything. I regret it ;)


The downside, as you point out, disproportionately affects mobile and high-latency connections.

Any kind of packet loss kneecaps the performance of the whole thing, unlike HTTP/1.1.



