
> But if this gets improved at the transport layer, seems like a win.

What do you mean? TCP and HTTP are already designed for slow links with packet loss; it's old, reliable tech from before modern connectivity. You just have to avoid pulling in thousands of modules in the npm dep tree, fanning out to 50 microservices, and layering on ads and client-side "telemetry". You set your Cache-Control headers and ETags, and for large downloads you'll want range requests. Perhaps some lightweight client-side retry logic in the case of PWAs. In extreme cases like Antarctica maybe you'd tune some TCP kernel params on the client to cope with long RTTs and packet loss. There is nothing major missing from the standard decades-old toolbox.
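
Rough sketch of what that client-side toolbox looks like with nothing but the standard fetch API (the URLs and the fetchWithRetry helper are made up for illustration, not from any particular app): a conditional GET with an ETag, a range request to resume a large download, and a small retry wrapper with backoff.

    // Retry wrapper: retry transient failures with exponential backoff.
    async function fetchWithRetry(url: string, init: RequestInit = {}, retries = 3): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        try {
          const res = await fetch(url, init);
          // 304 (not modified) and 206 (partial content) are success cases here.
          if (res.ok || res.status === 304 || res.status === 206) return res;
          if (attempt >= retries) return res;
        } catch (err) {
          if (attempt >= retries) throw err; // network failure after final attempt
        }
        await new Promise(r => setTimeout(r, 1000 * 2 ** attempt)); // 1s, 2s, 4s, ...
      }
    }

    // Conditional GET: server answers 304 Not Modified if the cached copy is still current.
    const cachedEtag = localStorage.getItem('article-etag');
    const article = await fetchWithRetry('/api/article/42', {
      headers: cachedEtag ? { 'If-None-Match': cachedEtag } : {},
    });

    // Range request: resume a big download instead of restarting it.
    const partial = await fetchWithRetry('/files/big.tar.gz', {
      headers: { Range: 'bytes=1048576-' }, // skip the first 1 MiB already on disk
    });

On a flaky satellite link the 304s and partial downloads are what save you: you only move the bytes you don't already have.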

Of course it's not optimal; the web isn't perfect for offline hybrid apps. But for standard things like reading the news, sending email, and chatting, you'll be fine.
