
I wholeheartedly agree. To add a bit more carrot for people on the fence, unlike TCP/UDP sockets, you don't have to be concerned about endianness because the bits never leave the machine. For a lot of use-cases, this is a big win that doesn't get as much appreciation as I think it should.



> unlike TCP/UDP sockets, you don't have to be concerned about endianness because the bits never leave the machine

Why would you be concerned about endianness even if you're using TCP or UDP, if you control the protocol?

Little endian won; in 99.9% of cases there's no reason to use big endian unless you're implementing an existing protocol which explicitly mandates it. Doing something over the network doesn't magically make it better to use big endian, and the whole "network endian = big endian" convention is just silly.


I just use little endian everywhere. I figure unit tests will catch it if I screw up.


This is reasonable, but a bit unfortunate specifically for TCP/UDP. The extremely strong convention is that all network protocols use "network byte order", which is big-endian.

Obviously no issue for local IPC, but when things become published network protocols, little-endian makes me sad.
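For concreteness, the conventional big-endian framing looks roughly like this in Rust (a minimal sketch; the header fields are made up):

    // Hypothetical header: a 2-byte message type and a 4-byte length,
    // both serialized in network byte order (big-endian).
    fn encode_header(msg_type: u16, len: u32) -> [u8; 6] {
        let mut buf = [0u8; 6];
        buf[0..2].copy_from_slice(&msg_type.to_be_bytes()); // the htons equivalent
        buf[2..6].copy_from_slice(&len.to_be_bytes());      // the htonl equivalent
        buf
    }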


> This is reasonable, but a bit unfortunate specifically for TCP/UDP. The extremely strong convention is that all network protocols use "network byte order", which is big-endian.

This is really only relevant for the packet headers, though. What goes into the payload is entirely up to the developer. There is no specific reason to keep the endianness the same between the TCP/UDP headers and the payload, as long as it's clear which one is used.

RFC1700 is marked as obsolete. IIRC, network order is big-endian because it allowed faster packet switching? But we're at the point where I can do dual-border routing with DPI orders of magnitude faster than switches from that era could do basic switching.

Some people choose little-endian for the payload because they:

a) are unaware of endianness and will discover a funny thing when they suddenly start talking between BE and LE nodes. Rust forces you to choose when you write or read numbers (see the sketch after this list).

b) think that since their CPU is LE, the payload should be LE as well for performance. This is false: even on ancient hardware the conversion would not take more than 3 cycles (and in the context of network communication, shaving off 3 cycles is a hilarious endeavor).

c) are aware of all of this and choose LE because they felt like it.
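
To illustrate (a): in Rust you simply cannot read or write a multi-byte integer from a buffer without naming the byte order. A sketch of decoding a length field (function and field names are made up):

    // Decoding a 4-byte length field: the standard library makes you pick
    // from_le_bytes or from_be_bytes explicitly, so BE vs LE is always a
    // conscious choice rather than an accident of the host CPU.
    fn read_len(buf: &[u8]) -> u32 {
        let bytes = [buf[0], buf[1], buf[2], buf[3]];
        u32::from_le_bytes(bytes) // or u32::from_be_bytes(bytes) -- you must pick one
    }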


> Think that since their CPU is LE, the payload should be LE as well for performance. This is false: even on ancient hardware the conversion would not take more than 3 cycles (and in the context of network communication, shaving off 3 cycles is a hilarious endeavor)

Having to do BE/LE conversions means that you can't just reinterpret-cast your network buffers into whatever the native application message type is. Yes, there are ways around that (by wrapping every integer field with an endian-converting accessor), but it is a significant amount of yak shaving.
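
The accessor workaround looks roughly like this in Rust (a sketch, not anyone's actual API; type and field names are made up):

    // Store the raw big-endian bytes in the message struct and convert on
    // every access, instead of reinterpreting the buffer as native integers.
    #[repr(C)]
    struct BeU32([u8; 4]);

    impl BeU32 {
        fn get(&self) -> u32 {
            u32::from_be_bytes(self.0)
        }
    }

    // Hypothetical wire-format header built out of the wrapper.
    #[repr(C)]
    struct MsgHeader {
        msg_type: BeU32,
        payload_len: BeU32,
    }

Every field read now goes through get(), which is exactly the yak shaving I mean.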


Well, the native application type depends on what platform that application is running on, and that means LE in 99.9% of cases. You don't need to sell LE for the payload to me; I'm already sold. It's the people who think network byte order has anything to do with the payload that are confused.


Yeah, I like the idea that my code could compile down to a memcpy, even if the compiler may not choose to do it.
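
Something like this, presumably (a sketch; whether it actually becomes a single load is up to the optimizer):

    // On a little-endian target, from_le_bytes doesn't reorder anything,
    // so this typically lowers to one unaligned 4-byte load (effectively a
    // memcpy), though the compiler isn't obliged to do that.
    fn decode(buf: &[u8; 4]) -> u32 {
        u32::from_le_bytes(*buf)
    }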



