
You’re getting the narrative quite wrong here and assuming a great deal of bad faith that simply doesn’t exist.

People often seem to think Google presented SPDY and QUIC to IETF as faits accomplis, and they were just adopted as HTTP/2 and HTTP/3 because Google said so.

This is not how the standardisation processes work. Rather: (1) many involved parties recognised that something like this would be of value; (2) Google developed something, because they happened to be one of the parties that cared the most about it; (3) Google gave it to IETF; (4) all the relevant stakeholders joined in on improving it until there was consensus and practical experience that it really was worth it; and (5) it was finalised and published.

What ends up being standardised normally has some quite major differences from what was initially proposed. IETF QUIC is definitely quite different from gQUIC, as HTTP/2 is from SPDY. Standardisation within IETF brings diverse parties together to improve things based upon their experience and expertise. Certainly there will be dissenters, because there are trade-offs everywhere (e.g. the Varnish author was lukewarm about HTTP/2, reckoning it wasn’t worth it and that more radical changes should be made to HTTP semantics), but the end result will be better than what was initially presented, and there must be broad consensus that what is to be published is better than what preceded it (in this case HTTP/1.1). At Fastmail I observed a fair bit of the process of the standardisation of JMAP at IETF, and it benefited enormously from the process, changing shape quite significantly in some areas from what Fastmail initially presented.

The end result of HTTP/2 is definitely harder to implement than HTTP/1 (though it’s still not too bad—I implemented it in the draft days and didn’t have any real trouble, there was mostly just more to implement than with HTTP/1), but of its operational parameters, it’s better in every way than HTTP/1. Turns out that a protocol being plain-text really just isn’t useful, so long as the semantics are conveyed—literally the only people that need to care about the wire protocol are the people making tools that speak it (that is, HTTP libraries).
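
To make that concrete with a rough sketch of my own (Go, with placeholder cert paths): Go's net/http enables HTTP/2 automatically for TLS servers via ALPN, and the handler code is identical whichever version gets negotiated.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // r.Proto is about the only place the wire version even shows up.
            fmt.Fprintf(w, "served over %s\n", r.Proto)
        })
        // ListenAndServeTLS negotiates HTTP/2 via ALPN by default; whether the
        // framing is text or binary never surfaces in application code.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }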

There’s really just one issue with HTTP/2: it makes it possible to use a single TCP connection where HTTP/1 probably used up to six, but this leads to TCP head-of-line blocking issues becoming more serious.

And make no mistake, TCP head-of-line blocking is a real issue on high-loss networks like the outskirts of wi-fi and cellular networks.

HTTP/2’s multiplexing solved real problems, and HTTP/3’s HOLB-fixing solves the last real problem with HTTP/2.

Google didn’t browbeat people into doing their will; rather, they presented a draft, and then everyone worked together to improve upon that draft, and they all (or at the least, almost all) agreed that the end result was a good improvement.

> a deliberate effort to deprive the web of its own nature by making it so complicated that only Google is capable to implement it.

… are you aware of how many HTTP/3 implementations there are already? In Rust alone, there are at least three fairly mature implementations: Quinn by various people, Quiche by Cloudflare, Neqo by Mozilla; as well as a handful more not-so-mature implementations.

Look, I’m not fond of Google, and I do think they abuse their position in many and various places, but this is not one of them.



The question is whether, on balance, http/2 and up really are worth the complexity when http/1.1 has served us well enough, with peak web traffic already behind us. Consider that we were somehow able to run the web (and messaging and mail and apps) 20-30 years ago with much, much less capable computers. It's almost as if computers have become way too powerful, so we had to invent atrocious web apps to offset any performance advances. Similarly, F/OSS was too ubiquitous, so we had to invent "the cloud". I can't help but see a self-serving power end game in this, with a few monopolies grabbing everything left and a legion of developers having a vested interest in keeping the hamster wheel spinning.

Regarding the implementations of http/2 and up you speak of, I know of only a single F/OSS one (nghttp2) actually used in server-side production.


> Regarding the implementations of http/2 and up you speak of, I know of only a single F/OSS one (nghttp2) actually used in server-side production.

I've been using nginx's HTTP/2 for years, and that's F/OSS. It doesn't use nghttp2.

Pretty sure this is widely used by others, including by Cloudflare who sponsored it.


Point taken, you're right re nginx


> peak web traffic already behind us

I absolutely don't see peak traffic being behind us. The internet is moving more and more bits over HTTP; video sizes keep increasing and video is still distributed over HTTP, VR is coming, the default protocol for any service is _always_ HTTP... we definitely haven't reached the maximum yet.


> video sizes keep increasing and video is still distributed over HTTP, VR is coming, the default protocol for any service is _always_ HTTP...

Therein lies the problem. Do we really all have to shoulder YouTube's (and the porn networks') problems? Especially when yt always sends huge video streams even when you only want music?


> Therein lies the problem. Do we really all have to shoulder YouTube's (and the porn networks') problems?

No, you don't. Whether you're working on the client or server side, you can keep using HTTP/1.1, and the other side will downgrade to accommodate you. Meanwhile, those of us who want to optimally serve our users on both good and bad connections will just use the multiple, freely available implementations of HTTP/2 and eventually HTTP/3.
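
If you want to pin yourself to HTTP/1.1 on the client side, it's usually a one-liner. A sketch in Go (placeholder URL; setting Transport.TLSNextProto to a non-nil empty map is the documented way to turn HTTP/2 off in the standard client):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Non-nil empty map: the client will never negotiate HTTP/2.
                TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
            },
        }
        resp, err := client.Get("https://example.com/") // placeholder URL
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Proto) // "HTTP/1.1", the server downgrades for you
    }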


This is a half-truth at best. It's only a matter of time until HTTP/1.1 is forced out by the tech/browser oligopoly: just look where we are with plaintext HTTP now.

The reason for HTTP/3 is marginal gains that only make sense at enormous scale for large operators. The rest of us pay for this with increased complexity.


> just look where we are with plaintext HTTP now.

Yes, and the upgrade to HTTPS has been an improvement for end-users.


And the upgrade to HTTP/3 is going to be an even bigger improvement!

Except when you don't need it, but are stuck with an obsolete ecosystem otherwise.


Fortunately, there is Cloudflare (incidentally, also behind the http/3 push), who'll be happy to proxy your old site using http/3.


3 has real value; 2 is a waste of time. I will state with great confidence you won't see 2 in the wild in the next 5 years.


I dunno, the topic may be a bit more nuanced than that because the HTTP/2 upgrade is free (via ALPN, part of the TLS handshake) while there’s not yet any good way of starting on HTTP/3. See https://news.ycombinator.com/item?id=24855848 that I wrote 42 days ago on this very topic, with arguments in both directions.
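
For anyone wondering what "free (via ALPN)" looks like in practice, here's a minimal Go sketch (hostname is a placeholder): the client offers h2 and http/1.1 inside the TLS handshake and the server picks one, so HTTP/2 costs no extra round trip; HTTP/3, by contrast, first has to be discovered (e.g. via an Alt-Svc header) before the client even knows to try UDP.

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // Offer both protocols; the server's choice comes back in the handshake.
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
            NextProtos: []string{"h2", "http/1.1"},
        })
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        fmt.Println(conn.ConnectionState().NegotiatedProtocol) // "h2" if the server speaks HTTP/2
    }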


> http/2 and up you speak of, I know of only a single F/OSS one (nghttp2) actually used in server-side production.

I can think of, like, five off the top of my head (mod_http2, nginx, h2o/quicly, about a million Go apps that use http2, Rust has a production HTTP/2 implementation or three, and Microsoft's and Apple's implementations if you want ones on the client side). It's not anyone's fault but your own if you can't look it up. There are 23 implementations in the QUIC interop matrix which are cross-tested against each other as of now, too, and it wasn't hard to find: https://docs.google.com/spreadsheets/d/1D0tW89vOoaScs3IY9RGC... and several of those stacks implement HTTP/2 as well.

It's not like the internet was some rosy garden in the HTTP/1.1 era where everything was magical and democratic and perfect. HTTP/1.1 is easy to implement wrong, and most people just used stock HTTP servers to front their application anyway regardless of the actual protocol spoken to the end user, which is how it's always been.

Besides, you don't actually have to be a megacorp to see the benefit of HTTP/2 or QUIC; you can just... try using your imagination. I have an actual real workload where I want to fetch potentially hundreds of metadata files from an HTTP server. HTTP/2 is a dramatic performance boost for workloads like this. It's not rocket science to see why, despite people wringing their hands about opening multiple parallel connections, etc.
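
As a toy version of that workload (URL pattern and count are invented), the sketch below fires a couple of hundred requests concurrently; against an HTTP/2 server, Go's default client multiplexes them as streams over a single TCP+TLS connection instead of queueing them behind a small pool of HTTP/1.1 connections.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "sync"
    )

    func main() {
        client := &http.Client{} // default transport attempts HTTP/2 over TLS
        var wg sync.WaitGroup
        for i := 0; i < 200; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                // Hypothetical URL pattern standing in for many small metadata files.
                resp, err := client.Get(fmt.Sprintf("https://example.com/metadata/%d.json", i))
                if err != nil {
                    return
                }
                defer resp.Body.Close()
                io.Copy(io.Discard, resp.Body)
            }(i)
        }
        wg.Wait()
    }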

> Similarly, F/OSS was too ubiquitous so we had to invent "the cloud".

You've got a lot of things very confused in your head, it seems. FOSS was never "ubiquitous" until recently, and it was only allowed that status because corporations decided they could make more money with it. They can also make money with proprietary software, so they do that too when possible. You seem to be implying the rise of FOSS was some kind of "outsider threat" to the system which needed to be suppressed, lest it make things "too good for us", and so it was then tragically coopted by Google. No, it was not; FOSS as a movement was always a captive animal from the very beginning and its viability was always at the mercy of corporations with mass market penetration and reach, not the other way around. It's not surprising it took off; it turns out "Don't pay people for their work and keep all the profits for yourself" is a tried-and-true corporate tactic for making money since basically forever.

Not that it's relevant to this thread, but the sooner the free software movement realizes it's completely failed, that it's never even truly had a chance at success, the sooner it'll be able to actually succeed at something.


mod_h2 uses nghttp2, and so does h2o, I believe.


> And make no mistake, TCP head-of-line blocking is a real issue on high-loss networks like the outskirts of wi-fi and cellular networks.

On high-loss and high-speed networks. Think of wi-fi or LTE with a very fat pipe after it.

On a really slow network, there is not much difference.

Google's "real world" telemetry and benchmarks showed HTTP/2 as great, but it wasn't.

Opening multiple TCP connections may well be cheaper than dealing with all of that.
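
A crude back-of-envelope sketch of why the fat pipe matters (all numbers are invented, Go used only as a calculator): a lost segment stalls every stream sharing the one TCP connection for roughly a round trip, while with independent connections or QUIC streams only the affected stream waits.

    package main

    import "fmt"

    func main() {
        const (
            rttMs    = 50.0   // assumed round-trip time
            lossRate = 0.01   // assumed 1% segment loss
            segments = 2000.0 // assumed segments for one page load
            streams  = 100.0  // assumed streams sharing the link
        )
        losses := segments * lossRate
        // One shared TCP connection: each loss stalls every stream ~1 RTT.
        sharedStall := losses * rttMs
        // Independent connections / QUIC streams: only the unlucky stream waits.
        independentStall := losses * rttMs / streams
        fmt.Printf("rough stall per stream: shared TCP %.0f ms, independent %.0f ms\n",
            sharedStall, independentStall)
    }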




