
That's a very Silicon Valley way to look at things. How much does that bandwidth cost on dial-up or 2G?


The client will kill the connection if it has the file cached, sooooo, not much.


I'm more concerned with a more "traditional" setup - say a festival providing WiFi to many people through limited upstream. Used to be, you could provide a caching proxy locally.

With the war on mitm, it's really hard to set up something that scales traffic in this way - even if the actual data requested by clients could readily scale.

I know it's a trade-off between security and features - but it still makes me sad.


It's 2G. By the time the cancel is received by the server, the server will have sent the resource, the bytes will have traveled and the user will be billed.


First you get a PUSH_PROMISE, which is a single frame. It's tiny.

That tells the client what the server wants to send.

The client can respond with an RST_STREAM frame (https://http2.github.io/http2-spec/#RST_STREAM). Again, that's a single frame.

By design it's meant to be extremely small and quick, even on high-latency and/or low-bandwidth connections.
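To make "a single frame" concrete, here is a rough sketch of the wire format from RFC 7540 (§4.1 and §6.4): every frame carries a 9-byte header, and an RST_STREAM payload is just a 4-byte error code, so cancelling a push costs 13 bytes on the wire. The helper name is mine, not from any library.

```python
import struct

def frame_header(length, ftype, flags, stream_id):
    """HTTP/2 frame header (RFC 7540 §4.1): 3-byte length,
    1-byte type, 1-byte flags, 4-byte stream identifier."""
    return (struct.pack(">I", length)[1:]          # low 3 bytes of length
            + bytes([ftype, flags])
            + struct.pack(">I", stream_id & 0x7FFFFFFF))

# RST_STREAM is frame type 0x3; its payload is a 4-byte error code.
# CANCEL is error code 0x8 (RFC 7540 §7).
CANCEL = 0x8
rst = frame_header(4, 0x3, 0, 2) + struct.pack(">I", CANCEL)
print(len(rst))  # 13 bytes total on the wire
```

Even at 2G rates (say ~5 KB/s) those 13 bytes are a fraction of a millisecond of airtime; the real cost is the round-trip latency, which is the point the parent comments are debating.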


You imply that there is a delay between the promise and the push, but it is not necessarily so. In fact the promise and the data may be sent in the same packet.


The client can disable push, so if it's on 2G, it can avoid this issue entirely.
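Disabling push is similarly cheap: it is one entry in the SETTINGS frame the client already sends at the start of every connection (RFC 7540 §6.5). A sketch of what that frame looks like on the wire, under the same hand-rolled-bytes assumption as above:

```python
import struct

# SETTINGS frame (type 0x4) travels on stream 0; each setting is a
# 2-byte identifier plus a 4-byte value (RFC 7540 §6.5.1).
SETTINGS_ENABLE_PUSH = 0x2
payload = struct.pack(">HI", SETTINGS_ENABLE_PUSH, 0)  # value 0 = push disabled
header = struct.pack(">I", len(payload))[1:] + bytes([0x4, 0x0]) + struct.pack(">I", 0)
frame = header + payload
print(len(frame))  # 15 bytes disables push for the whole connection
```

So a bandwidth-sensitive client never has to pay for unwanted pushes at all; it opts out once, up front.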


A copy of the spec can be found here:

https://http2.github.io/http2-spec/#PushResources

There are a few interesting things here that I want to point out:

* "A client can request that server push be disabled" This is an explicit parameter (SETTINGS_ENABLE_PUSH) in the settings the client sends to the server, https://http2.github.io/http2-spec/#SETTINGS_ENABLE_PUSH.

* "Pushed responses that are cacheable (see [RFC7234], Section 3) can be stored by the client, if it implements an HTTP cache. Pushed responses are considered successfully validated on the origin server (e.g., if the "no-cache" cache response directive is not present ([RFC7234], Section 5.2.2)) while the stream identified by the promised stream ID is still open"

Note that pushed content starts with a PUSH_PROMISE message to the client, which the client can decide of its own volition to reject. The spec for a PUSH_PROMISE frame is here, https://http2.github.io/http2-spec/#PUSH_PROMISE, and it's extremely small. Even on 2G or dial-up it's negligible by design.

* "Once a client receives a PUSH_PROMISE frame and chooses to accept the pushed response, the client SHOULD NOT issue any requests for the promised response until after the promised stream has closed.

If the client determines, for any reason, that it does not wish to receive the pushed response from the server or if the server takes too long to begin sending the promised response, the client can send a RST_STREAM frame, using either the CANCEL or REFUSED_STREAM code and referencing the pushed stream's identifier. "
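The accept-or-reset logic quoted above can be sketched as a small client-side decision function. This is purely illustrative: the function, `cache`, and `push_allowed` are hypothetical names, not part of any real client; only the frame types and error codes (CANCEL = 0x8, REFUSED_STREAM = 0x7, per RFC 7540 §7) come from the spec.

```python
CANCEL, REFUSED_STREAM = 0x8, 0x7

def on_push_promise(path, promised_stream_id, cache, push_allowed=True):
    """Hypothetical handler for a received PUSH_PROMISE (RFC 7540 §8.2)."""
    if not push_allowed:
        # We never wanted pushes on this connection.
        return ("RST_STREAM", promised_stream_id, REFUSED_STREAM)
    if path in cache:
        # Already cached: reset the promised stream before the
        # server spends bandwidth sending bytes we have.
        return ("RST_STREAM", promised_stream_id, CANCEL)
    # Accept: per the spec, don't issue our own request for `path`
    # until the promised stream has closed.
    return ("ACCEPT", promised_stream_id, None)

print(on_push_promise("/style.css", 2, cache={"/style.css": b"..."}))
```

The caveat from the grandparent comments still applies: on a high-latency link the server may have already transmitted part (or all) of the pushed response before the RST_STREAM arrives, so the cancel bounds the waste rather than eliminating it.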

Wittingly or otherwise, your message comes across as "everyone on the standards boards is an idiot, nobody thought about anything beyond the valley, and I'm smarter than they are." That's beyond ridiculous. The standard was designed by subject-matter experts from right across the world, with interests in web technologies across all sorts of markets, including developing nations where every single byte is important. A lot has been designed into the HTTP/2 specification to account for that and to explicitly try to improve the end-user experience under those conditions.





