
The client will kill the pushed stream if it already has the file cached, so: not much.


I'm more concerned about a more "traditional" setup - say, a festival providing WiFi to many people through a limited upstream. It used to be that you could run a local caching proxy.

With the war on MITM, it's really hard to set up anything that scales traffic this way - even when the data the clients request is largely shared and would cache well.

I know it's a trade-off between security and features - but it still makes me sad.


It's 2G. By the time the server receives the cancel, it will already have sent the resource: the bytes will have traveled and the user will be billed for them.
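Back-of-envelope, assuming GPRS-class 2G (~40 kbit/s down, ~600 ms round trip): between the server emitting the PUSH_PROMISE and receiving the client's cancel there is at least one full round trip, during which it can transmit roughly 40 kbit/s × 0.6 s ≈ 3 KB. All of that crosses the metered radio link whether the client wanted it or not.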


First you get a PUSH_PROMISE, which is a single frame. It's tiny.

That tells the client what the server wants to send.

The client can respond with an RST_STREAM frame (https://http2.github.io/http2-spec/#RST_STREAM). Again, that's a single frame.

By design this exchange is meant to be extremely small and quick, even on high-latency and/or low-bandwidth connections.
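To make the exchange concrete, here's a minimal sketch assuming the Python h2 library (the handle_events/cached_paths names and the surrounding socket plumbing are mine, for illustration):

    # Cancel a promised push with a single RST_STREAM frame.
    import h2.connection
    import h2.events
    from h2.errors import ErrorCodes

    conn = h2.connection.H2Connection()  # client side by default

    def handle_events(events, cached_paths):
        for event in events:
            if isinstance(event, h2.events.PushedStreamReceived):
                # The PUSH_PROMISE carries the request headers the
                # server intends to answer; check the cache first.
                path = dict(event.headers).get(b':path')
                if path in cached_paths:
                    conn.reset_stream(event.pushed_stream_id,
                                      error_code=ErrorCodes.CANCEL)
        # whatever conn.data_to_send() returns then goes out the socket

The refusal is one tiny frame, so it costs a handful of bytes even on a slow link.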


You imply that there is a delay between the promise and the push, but that is not necessarily so. In fact, the promise and the data may be sent in the same packet.


The client can disable push, so if it's on 2G, it can avoid this issue entirely.
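For instance, a minimal sketch (again assuming the Python h2 library): push is refused by advertising a single setting at connection start.

    # Refuse server push entirely: SETTINGS_ENABLE_PUSH = 0.
    import h2.connection
    from h2.settings import SettingCodes

    conn = h2.connection.H2Connection()
    conn.initiate_connection()
    # A compliant server must not send PUSH_PROMISE after seeing this.
    conn.update_settings({SettingCodes.ENABLE_PUSH: 0})
    # bytes from conn.data_to_send() then go out over the socket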



