One difference is that when your endian-oblivious code runs on a BE system, it can be subtly wrong in a way that's hard to diagnose, which is a whole lot worse than not working at all.
But for everything else, it's fine to assume little-endian.
You sound like some sort of purist, so sure, if you really want to be explicit and support both endiannesses in your software when needed, go for it. But as general advice to random programmers: don't bother.
u32::from_le_bytes
u32::from_ne_bytes (the "n" stands for "native")
You're also the most likely person to try to run your code on an 18 bit machine.
Granted, I still work on a fair number of big endian systems even though my daily drivers (ppc64le, Apple silicon) are little.
How come you're running ppc64le as a daily driver?
> [fixes] specific to VMS (a.k.a. OpenVMS),
> For conformity with DECSYSTEM-20 Kermit ...
> running on a real Sun3, compiled with a non-ANSI compiler (Sun cc 1.22)
> this is fatal in HP-UX 10 with the bundled compiler
> OpenWatcom 1.9 compiler
> OS/2 builds
> making sure that all functions are declared in both ANSI format and K&R format (so C-Kermit can be built on both new and old computers)
Oooooh! A clang complaint: 'Clang also complains about perfectly legal compound IF statements and/or complex IF conditions, and wants to have parens and/or brackets galore added for clarity. These statements were written by programmers who understood the rules of precedence of arithmetic and logical operators, and the code has been working correctly for decades.'
As of the fourth Beta, DECnet support has been re-enabled. To make LAT or CTERM connections you must have a licensed copy of Pathworks32 installed.
SSH is now supported on 32-bit ARM devices (Windows RT) for the first time.
REXX support has been extended to x86 systems running Windows XP or newer. This was previously an OS/2-only feature.
No legacy telnet encryption (no longer useful, but may return in a future release anyway)
For context:
The first new Kermit release for Windows in TWENTY-TWO YEARS
Yes, it's called Kermit 95 once again! K95 for short. 2025 is its 40th anniversary.
Many of the tests I did back in the 1990s seem pointless now. Do you have checks for non-IEEE 754 math?
It's one of the caveats of the C family that developers are supposed to be aware of, but often aren't: it doesn't support IEEE 754 fully. There is a standard annex for doing so, but no one has fully implemented it.
Of course in my case what I'm actually concerned with is the behavior surrounding inf and NaN. Thankfully I've never been forced to write code that relied on subtle precision or rounding differences. If it ever comes up I'd hope to keep it to a platform independent fixed point library.
But, for example, LLVM does not fully support IEEE 754 [0].
And neither does GCC, which lists it as unsupported, despite defining the macro and having partial support. [1]
The biggest caveat is in Annex F of the C standard:
> The C functions in the following table correspond to mathematical operations recommended by IEC 60559. However, correct rounding, which IEC 60559 specifies for its operations, is not required for the C functions in the table.
The C++ standard [2] barely covers support: if a type supports any of the properties of IEC 60559, then it gets is_iec559, even if that support is _incomplete_.
This paper [3] is a much deeper dive, but the current state for C++ is worse than C's: it's underspecified.
> When built with version 18.1.0 of the clang C++ compiler, without specifying any compiler options, the output is:
> distance: 0.0999999
> proj_vector_y: -0.0799999
> Worse, if -march=skylake is passed to the clang C++ compiler, the output is:
> distance: 0.1
> proj_vector_y: -0.08
[0] https://github.com/llvm/llvm-project/issues/17379
[1] https://www.gnu.org/software/gcc/projects/c-status.html
[2] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/n49...
Huh. https://en.cppreference.com/w/c/23.html says the "Old feature-test macro" __STDC_IEC_559__ was deprecated in C23, in favor of __STDC_IEC_60559_BFP__.
Even for "regular" architectures this turns out to be important for FP data types. Long double is an f128 on Emscripten but an f80 on x86_64 Clang, where f128 is provided as __float128. The last time I updated my code (admittedly quite a while ago) Clang version 17 did not (yet?) implement std::numeric_limits support for f128.
Honestly there's no good reason not to test these sorts of assumptions when implementing low level utility functions because it's the sort of stuff you write once and then reuse everywhere forever.
It is _all_ non-IEEE 754 math.
That it isn't compliant is a compiler guarantee, in the current state of things.
You may as well have an `assert(1)`.
I don't support the full range of platforms that C supports. I assume 8-bit chars. I assume good hardware support for 754. I assume the compiler's documentation is correct when it says it maps "double" to "binary64" and uses native operations. And I assume that if someone else compiles my code with flags that break 754 conformance, like fused multiply-add contraction, then it's not a problem I need to worry about.
For that matter, my code doesn't deal with NaNs or inf (other than input rejection tests) so I don't even need fully conformant 754.
You wrote "I generally include various static asserts about basic platform assumptions."
I pointed out "There's platform and there's platform.", and mentioned that I assume POSIX.
So of course I don't test for CHAR_BIT as something other than 8.
If you want to support non-POSIX platforms, go for it! But adding tests for every single place where the C spec allows implementation-defined behavior, and where all the compilers I use have had the same implementation-defined behavior for years or even decades, seems quixotic to me, so I'm not going to do it.
And I doubt you have tests for every single one of those implementation-defined platform assumptions, because there are so many of them, and maintaining those tests when you don't have access to a platform with, say, 18-bit integers seems likely to end up producing flawed tests.
No? I don't over generalize for features I don't use. I test to confirm the presence of the assumptions that I depend on. I want my code to fail to compile if my assumptions don't hold.
I don't recall if I verify CHAR_BIT or not but it wouldn't surprise me if I did.
I can't support IEEE 754, so it's simply irrelevant, so long as I know I cannot support it and that behaviour differs.
My modern choice is just to make clear to BE users that I don't support them, and while I will accept patches I'll make no attempt to bugfix for them, because every time I try to get a BE VM running modern Linux it takes a whole afternoon.
It's also increasingly hard to test, particularly when you have large, expensive test suites which run incredibly slowly on these simulated machines.