What do you mean by “networking protocols,” exactly? Most packet-level Internet protocols (TCP, UDP, etc.) are big endian. Ethernet is big endian at the octet level and little endian on the wire at the bit level. Network order is big endian because it has to be something, and it’s easier to draw pictures as a matrix of bytes that are transmitted from left to right and top to bottom.

There is no right answer to endianness. It’s like which side of the road cars should drive on. You just need to pick one and stick with it. Mostly people bitch about endianness when their processor is the opposite of whatever someone else picked.

But processors are all over the map. IBM mainframes are big endian. Motorola 68k is big. HP PA-RISC is big. IBM Power started big and then went bi. MIPS is bi. RISC-V is little. ARM is bi but dominantly little (AArch64). And of course x86 is little. So, take your pick.

That said, little endianness is the right answer, as is driving on the right side of the road.
reply
> RISC-V is little

These days it's bi, actually :) Although I don't see any CPU designer actually implementing that feature, except maybe MIPS (who have stopped working on their own ISA, and now want all their locked-in customers to switch to RISC-V without worrying about endianness bugs).

reply
Well, sort of. Instruction fetch is always little-endian, but data loads/stores can be flipped into big. But IIRC the standard profiles specify little, so it's pretty much always going to be little. But yeah, technically speaking, data loads/stores could be big. Maybe that's important for some embedded environments.
reply
> Well, sort of. Instruction fetch is always little-endian but data load/store can be flipped into big

ARM works the same way. And SPARC is the opposite: instructions are always big-endian, but data can be switched to little-endian.

reply
You should not use format-swapping operations.

You should instead use format-swapping loads/stores (i.e. deserialization/serialization).

This is because your computer cannot compute on values of non-native endianness. As such, the value is logically converted back and forth on every operation. Of course, a competent optimizer can elide these conversions, but such actions fundamentally lack machine sympathy.

The better model is viewing the endianness as a serialization format and converting at the boundaries of your compute engine. This ensures you only need to care about endianness when serializing and deserializing wire formats and that you have no accidental mixing of formats in your internals; everything has been parsed to native before any computation occurs.

Essentially, non-native endianness should only exist in memory and preferably only memory filled in by the outside world before being parsed.

reply