Hacker News

Unrelated, but I just want to rant.

I hate how the minimum possible websocket message is smaller than the largest possible websocket header. You can have an entire message that's 6 bytes, but you can also have a header that's 14 bytes.

I'm curious why they added support for massive messages (the 64-bit length field allows payloads up to 2^63 bytes). And I'm curious how much bandwidth would be "wasted" if, instead of a 6-byte masked header for messages of 0-125 bytes and a 14-byte masked header for the largest ones, they just made it a fixed 9-byte header for 0-2^32.
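For reference, the header-size rules from RFC 6455 can be sketched like this (Python for illustration): 2 base bytes, an optional 2- or 8-byte extended length, and a 4-byte masking key on client-to-server frames.

```python
def ws_header_size(payload_len: int, masked: bool = True) -> int:
    """Header size in bytes for a WebSocket frame, per RFC 6455."""
    size = 2                      # FIN/RSV/opcode byte + mask-bit/length byte
    if payload_len > 0xFFFF:
        size += 8                 # 64-bit extended payload length
    elif payload_len > 125:
        size += 2                 # 16-bit extended payload length
    if masked:
        size += 4                 # masking key (required client -> server)
    return size

# Smallest masked frame: empty payload -> 6 bytes total.
print(ws_header_size(0))          # 6
# Largest header: masked frame with a 64-bit length -> 14 bytes.
print(ws_header_size(1 << 20))    # 14
```

So the 6-vs-14 asymmetry in the rant above comes from the two optional fields stacking on a masked frame.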




I think you are ignoring the TCP/IP overhead. Add:

22 bytes Ethernet header (14-byte header plus 8-byte preamble/SFD)

20 bytes IP header

20-32 bytes TCP header

16 bytes Ethernet footer (4-byte FCS plus 12-byte interframe gap)

That's 78 to 90 bytes extra - trying to optimise the WebSocket header size is really not going to help much.
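Summing those layers (using the figures above, which count the Ethernet preamble and interframe gap as overhead) shows why the disputed 3 bytes barely register against a small message:

```python
# Per-packet overhead from the layers listed above.
ethernet_header = 22              # 14-byte header + 8-byte preamble/SFD
ip_header = 20
tcp_header_min, tcp_header_max = 20, 32
ethernet_footer = 16              # 4-byte FCS + 12-byte interframe gap

lo = ethernet_header + ip_header + tcp_header_min + ethernet_footer
hi = ethernet_header + ip_header + tcp_header_max + ethernet_footer
print(lo, hi)                     # 78 90

# A minimal 6-byte WebSocket message on the wire:
print(round(3 / (lo + 6) * 100))  # saving 3 bytes is ~4% of the packet
```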

See https://www.researchgate.net/publication/269031593_Performan...


I'm trying to simplify parsing/writing by making the header less variable. The spec is the one carrying the optimization, saving 3 bytes on short messages. Why?
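To illustrate the variability being complained about, here's a minimal parse sketch (Python, written from the RFC 6455 layout): the three length encodings are exactly what makes a reader branchy, versus a single fixed-width read.

```python
import struct

def parse_frame_header(buf: bytes):
    """Parse a WebSocket frame header per RFC 6455.

    Returns (payload_len, mask_key_or_None, header_size).
    """
    b1 = buf[1]
    masked = bool(b1 & 0x80)
    length = b1 & 0x7F
    offset = 2
    if length == 126:             # 16-bit extended length follows
        (length,) = struct.unpack_from(">H", buf, offset)
        offset += 2
    elif length == 127:           # 64-bit extended length follows
        (length,) = struct.unpack_from(">Q", buf, offset)
        offset += 8
    mask = None
    if masked:                    # 4-byte masking key follows
        mask = buf[offset:offset + 4]
        offset += 4
    return length, mask, offset

# Masked 5-byte payload: 6-byte header.
print(parse_frame_header(b"\x81\x85abcd"))   # (5, b'abcd', 6)
```

A fixed 9-byte header (1 opcode byte + 4 length bytes + 4 mask bytes, as proposed above) would collapse all of this into a single unconditional read.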


Ahhhhh, well, perfection is hard to achieve eh?

Off-topic, but all those little cuts do sting. Of course, if they did it your way, surely someone else would be ranting about why they wasted bytes unnecessarily.


What kind of optimization? From a performance point of view, 64 bytes is a cache line (on most architectures), so if the message fits there, it's good enough.


WebRTC is UDP based and can have a lot less overhead than this (with the obvious drawbacks).


They mention WebSockets, which are TCP-based; they are not talking about WebRTC.



