Not exactly the same, but on Windows if you use entirely Win32 calls you can avoid linking any C runtime library. Win32 is below the C standard library on Windows and the C runtime is optional.
reply
This is one of the cornerstones that guarantee Windows can easily upgrade the C runtime and make performance and security upgrades. Win32 APIs have a different function calling ABI too.

So the only part that gets "bloated" is the Win32 API itself (which is spread across multiple DLLs and doesn't actually bloat RAM usage). Most of the time even those functions and structures are carefully designed with some future-proofing, but it is common to see APIs like CreateFile, CreateFile2, CreateFile3. Internally the earlier versions are upgraded to call the latest version, so there's not much bloat there either.

When the C runtime and the OS system calls are combined into a single binary, as on POSIX systems, it creates the ABI hell we're in with modern Unix-likes. Either the OSes have to regularly break C ABI compatibility for updates, or we have to live with terrible implementations.

The GNU libc and Linux combo is particularly bad. On GNU/Linux (or with any of the current libc replacements), dynamic loading is also provided by the C library. This makes "forever" binary compatibility particularly tricky to achieve. Glibc broke certain games / Steam by removing parts of their ELF implementation: https://sourceware.org/bugzilla/show_bug.cgi?id=32653 . They backed it out due to huge backlash from the community.

If "the year of the Linux desktop" is ever to happen, they need to either do what Android did and change the definition of what a software package is, or split Glibc into 3 parts: syscalls, dynamic loader, and the actual C library.

PS: There is actually a catch to your "C runtime is optional" argument. Microsoft still intentionally holds back the ability to compile native-ABI Windows programs without Visual Studio.

The structured exception handlers (Windows' equivalent for SIGILL, SIGBUS, etc., though not for SIGINT or SIGTERM) are populated by object files from the C runtime libraries (called VCRuntime/VCStartup). So it is actually not possible to build official Windows binaries without MSVC or another C runtime, like Mingw-w64, that provides those symbols. It looks like some developers at Microsoft wanted to open-source VCRuntime / VCStartup but it was ~vetoed~ not fully approved by some people: https://github.com/microsoft/STL/issues/4560#issuecomment-23... , https://www.reddit.com/r/cpp/comments/1l8mqlv/is_msvc_ever_g...

reply
> split Glibc into 3 parts: syscalls, dynamic loader and the actual C library.

What is left of the C standard library, if you remove syscall wrappers?

> ABI hell

Is that really the case? From my understanding the problem is more that Linux isn't an OS, so you can't rely on any *.so being there.

reply
> > split Glibc into 3 parts: syscalls, dynamic loader and the actual C library.

> What is left of the C standard library, if you remove syscall wrappers?

Still quite a bit, actually. Stuff like malloc, realloc, free, fopen, FILE, getaddrinfo, getlogin, math functions like cos, sin, tan, the stdatomic implementations, and some string functions are all defined in the C library. They are not direct system calls, unlike open, read, write, ioctl, setsockopt, capget, capset, ...

> > ABI hell

> Is that really the case? From my understanding the problem is more, that Linux isn't an OS, so you can't rely on any *.so being there.

That's why I used the more specific term GNU/Linux at the start. There is no guarantee that any .so file can be successfully loaded even if it is there. Glibc can break anything. With the Steam bug I linked, this is exactly what happened: the shared object files were there, but Glibc stopped supporting a certain ELF file field.

There is one and only one guarantee with Linux-based systems: syscalls (and other similar ways to talk to the kernel, like ioctl struct memory layouts) always keep working.

There is so much invisible dependence on Glibc behavior. Glibc also controls how DNS works for programs, for example. That also needs to be split into a different library. Same for managing user info like `getlogin`. Moreover, all this functionality is actually implemented as dynamic library plugins in Glibc (NSSwitch) that rely on the ld.so that's also shipped with Glibc. It is literally a Medusa head of snakes biting multiple tails. ABI breakages like this are extremely hard to test for.

reply
> malloc, realloc, free

Wrappers around sbrk, mmap, etc., or whatever the modern variant is.

> fopen, FILE

Wrapper around open, write, read, close.

> stdatomic implementations

You can argue these are wrappers around thread syscalls.

> math functions like cos, sin tan, some string functions are all defined in C library

True for these, but they are so small they could just be inlined directly; on their own they wouldn't necessarily deserve a library.

> That's why I used more specific term GNU/Linux at the start.

While GNU/Linux does describe a complete OS, it doesn't describe any specific OS. Every distro does its own thing, so I think those are what you'd actually need to call an OS. But everything is built so that the user can take control over the architecture and over which components the OS consists of, so every installation can be a snowflake, and then it is technically its own OS.

I personally consider libc and the compiler (which together make a C implementation) to be part of the OS. I think this is grounded both in theory and in practice. Only in some weird middle ground between theory and practice can you consider them not to be.

reply
> malloc, realloc, free

> Wrapper around sbrk, mmap, etc. whatever the modern variant is.

I don't think that's correct. While `malloc` uses the `brk` syscall to allocate large memory areas, it uses non-trivial algorithms and data structures to further divide those areas into the smaller chunks that are actually returned. Using a syscall for every `malloc`/`free` would be quite an overhead.

> fopen, FILE

> Wrapper around open, write, read, close.

They're not just wrappers. They implement internal buffering and some transformations (for example, see "binary" mode vs. "text" mode).

> stdatomic implementations

> You can argue, these are wrappers around thread syscalls.

No, they're wrappers around compiler intrinsics which emit specific assembly instructions. At least for any sane architecture.

> I personally consider libc and the compiler (which both make a C implementation) to be part of the OS. I think this is both grounded in theory and in practice. Only in some weird middle ground between theory and practice you can consider them to not be.

C is used a lot in embedded projects; I'd even guess that's the majority of C code nowadays. These projects usually don't use libc (as there's no operating system, so the concept of a file or a process just doesn't make sense). So it's very important to separate the C compiler from libc, and the C compiler must be able to emit code with zero dependencies.

reply
Yeah, sure, they do more than just issue the syscall; that's the point of an abstraction. But they still provide the functionality of the syscalls, just in the abstraction you want them exposed as in the programming language. That's what I would consider a wrapper.

> C is used a lot in embedded projects.

Sure, that's a freestanding implementation, whose primary distinction is that it doesn't rely on libc. The notion of libc being part of the OS in the wider sense still holds water here, since no OS corresponds to no libc.

reply
mmap and sbrk would be very poor implementations of malloc.
reply
We are talking about wrappers on top of mmap and sbrk. Of course you wouldn't use mmap and sbrk instead of the abstraction. It's really the same as the difference between fread and read.
reply
> Glibc broke certain games / Steam by removing some parts of their ELF implementation: https://sourceware.org/bugzilla/show_bug.cgi?id=32653 . They backed due to huge backlash from the community.

It would be better if you specified which part was removed: support for executable code on the stack. This is used by malware in 99% of cases, so it is better to break the 1% of broken programs and have the other 99% run safer.

reply
The comments on that bug report mention several language runtimes getting broken. Preventing languages that are generally safer than C from working seems rather counterproductive to overall security.
reply
> If "the year of Linux desktop" would ever happen, they need to either do an Android and change the definition of what a software package is, or split Glibc into 3 parts: syscalls, dynamic loader and the actual C library.

The dynamic loader used to be its own library, FWIW. It got merged into the main one recently.

reply
I'm sick of glibc compatibility problems. Are there any recommended replacements?
reply
For non-graphical apps, you can link statically against musl to produce a binary that only depends on the Linux kernel version and not the version or type of libc on the system. You may take a performance hit as musl isn't optimized for speed, and a size hit for shipping your own libc, and a feature hit because musl is designed to be minimal, but for many command line tools all of these downsides are acceptable.
reply
.interp to a glibc/libc you ship or static linking. These days it’s probably faster (in dev time) to just run a container than setting up a bespoke interp and a parallel set of libraries (and the associated toolchain changes or binary patching needed to support it).
reply
Running a container is exactly my current solution as well.

Are there any other solutions that don't depend on glibc?

reply
Guix (and I assume Nix as well, but I only know Guix) can create a package that is completely self contained including glibc. You can even have it be an AppImage https://guix.gnu.org/manual/devel/en/html_node/Invoking-guix...
reply
Glibc is half of GNU/Linux. You can of course use another libc, but it will be a different OS.
reply
Yeah, even library loading relies on glibc, so we can't really escape glibc on GNU/Linux.
reply
I don't really know why people expect to be able to bypass the OS and not have problems. It seems to come from people who think a "Linux OS" only consists of the Linux kernel.
reply
I wonder if anyone has implemented loading shared libraries without glibc? It shouldn't be that hard; you'd just need to implement an ELF parser and a glibc-compatible relocation mechanism.
reply
I don't think nobody has done that. It is just that vendoring your own OS comes with a lot of work.
reply
I think using syscalls directly is a worse idea than loading shared libraries, and new kernel features, like ALSA (audio playback), DRM (graphics rendering) and others, use libraries instead of documenting syscalls and ioctls. This is better because it allows intercepting and subverting the calls, adding support for features even if the kernel doesn't support them, makes it easier to port code to other OSes, supports different architectures (32-bit code on a 64-bit kernel), and allows changing the kernel interface without breaking anything. So the Windows-style approach with system libraries is better in every respect.
reply
I once wrote a liblinux project just for this!! It was indeed extremely fun. Details in my other comment:

https://news.ycombinator.com/item?id=45709141

I abandoned it because Linux itself now has a rich set of nolibc headers.

Now I'm working on a whole programming language based around this concept. A freestanding lisp interpreter targeting Linux directly with builtin system call support. The idea is to complete the interpreter and then write the standard library and Linux user space in lisp using the system calls.

It's been an amazing journey. It's incredible how far one can take this.

reply
I generally try to stay portable, but file descriptors are just too nice not to use.
reply
File descriptors are part of the linux syscall API, not libc. Are you thinking of FILE?
reply
The "syscall API" is part of libc too. The read syscall is a trap: you put arguments in the right registers and issue the correct instruction[1] to enter the kernel. That's not something that can be expressed in C. The read() function that your C code actually uses is a C function provided by the C library.

[1] "svc 0" on ARM, "int 0x80" on i386, etc...

reply
> That's not something that can be expressed in C.

I've often made the argument that compilers should add builtins for Linux system calls. Just emit code in the right calling convention and the system call instruction, and return the result. Even high level dynamic languages could have their JIT compilers generate this code.

I actually tried to hack a linux_system_call builtin into GCC at some point. Lost that work in a hard drive crash, sadly. The maintainers didn't seem too convinced in the mailing list so I didn't bother rewriting it.

> The read() function that your C code actually uses is a C function provided by the C library.

These are just magic wrapper functions. The actual Linux system call entry point is language agnostic, specified at the instruction architecture level, and is considered stable.

https://www.matheusmoreira.com/articles/linux-system-calls

This is different from other systems which force people to use the C library to interface with the kernel.

One of the most annoying things about the Linux manuals is that they conflate the glibc wrappers with the actual Linux system calls. The C library does a lot more than just wrap these things: it dynamically chooses the best variants and even implements cancellation/interruption mechanisms. Separating the Linux behavior from the libc behavior can be difficult, and in my experience requires reading kernel source code.

reply
> I've often made the argument that compilers should add builtins for Linux system calls. Just emit code in the right calling convention and the system call instruction, and return the result. Even high level dynamic languages could have their JIT compilers generate this code.

You can only do that when you compile for a specific machine. In general you are compiling for some abstract notion of an OS. JITs always compile for the machine they are running on, so they don't have that problem. There is code that is compiled directly to the syscalls specific to your machine so that abstract code can use it. It's called libc, for the C language.

> One of the most annoying things in the Linux manuals is they conflate the glibc wrappers with the actual system calls in Linux. The C library does a lot more than just wrap these things, they dynamically choose the best variants and even implement cancellation/interruption mechanisms. Separating the Linux behavior from libc behavior can be difficult, and in my experience requires reading kernel source code.

In my experience there are often detailed explanation in the notes section. From readv(2):

  NOTES
       POSIX.1  allows  an  implementation  to  place a limit on the number of
       items that can be passed in iov.  An implementation can  advertise  its
       limit  by  defining IOV_MAX in <limits.h> or at run time via the return
       value from sysconf(_SC_IOV_MAX).  On modern Linux systems, the limit is
       1024.  Back in Linux 2.0 days, this limit was 16.

   C library/kernel differences
       The  raw  preadv() and pwritev() system calls have call signatures that
       differ slightly from that of the corresponding GNU  C  library  wrapper
       functions  shown  in  the SYNOPSIS.  The final argument, offset, is un‐
       packed by the wrapper functions into two arguments in the system calls:

           unsigned long pos_l, unsigned long pos

       These arguments contain, respectively, the low order and high order  32
       bits of offset.

   Historical C library/kernel differences
       To  deal  with  the  fact  that IOV_MAX was so low on early versions of
       Linux, the glibc wrapper functions for readv() and  writev()  did  some
       extra  work  if  they  detected  that the underlying kernel system call
       failed because this limit was exceeded.  In the case  of  readv(),  the
       wrapper  function  allocated a temporary buffer large enough for all of
       the items specified by iov, passed that buffer in a  call  to  read(2),
       copied  data from the buffer to the locations specified by the iov_base
       fields of the elements of iov, and then freed the buffer.  The  wrapper
       function  for  writev()  performed the analogous task using a temporary
       buffer and a call to write(2).

       The need for this extra effort in the glibc wrapper functions went away
       with Linux 2.2 and later.  However, glibc continued to provide this be‐
       havior until version 2.10.  Starting with glibc version 2.9, the  wrap‐
       per  functions  provide  this behavior only if the library detects that
       the system is running a Linux kernel older than version 2.6.18 (an  ar‐
       bitrarily  selected  kernel  version).  And since glibc 2.20 (which re‐
       quires a minimum Linux kernel version of  2.6.32),  the  glibc  wrapper
       functions always just directly invoke the system calls.
reply
> You can only do that, when you compile for a specific machine.

You always compile for a specific machine. There is always a target instruction set architecture, and it decides the calling convention used for Linux system calls. The compiler can even produce an error if the target is not supported by Linux.

> In general you are compiling for some abstract notion of an OS.

This "abstract notion of an OS" boils down to the libc. Freestanding C gets rid of most of it. Making system calls is also perfectly valid in hosted C. Modern languages like Rust also have freestanding modes.

> In my experience there are often detailed explanation in the notes section.

That's the problem. Why is the Linux stuff just a bunch of footnotes in the Linux manual? It should be in the main section. The glibc specifics should be footnotes.

reply
Specific machine meaning defined set of installed software, versions in install locations.

Abstract notion of OS meaning Debian 12. Not Linux kernel commit ####, GNU libc commit ####, dpkg commit ####, apt commit ####, Apache httpd commit #### with patch ### to ### from Debian 4 version ### and Ubuntu 21 version ###, SQLite3 with special patches ### installed in /opt/bin/foo, ... (you get the idea).

> That's the problem. Why is the Linux stuff just a bunch of footnotes in the Linux manual? It should be in the main section. The glibc specifics should be footnotes.

Because you are looking at the OS manual, not at the documentation of the kernel. Notes and Bugs are also not footnotes in man pages. They are pretty important and are basically the first free-form sections where you can talk about the ideas, ideals and history. The first part is a pretty strict, formal description of the calling semantics.

reply
Let's systematize this.

Compilers build for target triples such as x86_64-linux-gnu. It is of the form isa-kernel-userspace. If kernel is linux, the builtin can be used. The isa determines the code generated by the compiler, both in general and for the builtin. The userspace can be anything at all, including none. Sometimes compilers build for target quadruples which also include a vendor, and that information is also irrelevant.

reply
I am not sure you understand my point. Inlining libc definitions for syscalls is fine when you only care about Debian 12 commit hash ####. It will break as soon as you think your machine is running Debian 12 but you've updated it, so it actually includes the latest userspace patches. It will also break when a user uses OS configuration to change the behaviour of some OS functionality but your code is oblivious to that, because your code bypasses the OS's version of libc.

Modifying the OS is fine, if this is what you want to do, but it comes with tradeoffs.

----

You wrote earlier:

> actually tried to hack a linux_system_call builtin into GCC at some point. [...] The maintainers didn't seem too convinced in the mailing list so I didn't bother rewriting it.

I am not sure what exactly this means. There is syscall(2) in the libc, if you want to do this. If you want to inline the wrappers you can pass -static to the compiler invocation.

reply
> It will break

If it ever breaks, it's a bug in the Linux kernel.

> It will also break when a user uses the OS configuration to change the behaviour of some OS functionality

Can you give concrete examples of this?

> There is syscall(2) in the libc, if you want to do this.

I know. I've written my own syscall(), as well. The idea is to put it in the compiler as a builtin so there's no need to even write it.

reply
> If it ever breaks, it's a bug in the Linux kernel.

No, your program will still instruct the kernel to do the same. It will just cause conflicts with the other OS internals.

> Can you give concrete examples of this?

Adding another encoding as a gconv module. The DNS issues everyone is talking about.

I don't know what that gets you compared to using syscall(2) and -static. When you want your program to depend on the kernel API instead of the OS API, then you should really link libc statically.

reply
> It will just cause conflicts with the other OS internals.

But not with the kernel.

"Other OS internals" are just replaceable components. The idea is to depend on Linux only, not on Linux+glibc.

> Adding another encoding as a gconv module. The DNS issues everyone is talking about.

Those are glibc problems, not Linux problems. Linux does not perform name resolution or character encoding conversion.

reply
The libc syscall wrappers are part of the libc API, but on Linux, syscalls are part of the stable ABI and so you can freely do __asm__(...) to write your own version of syscall(2) and it is fully supported. Yeah, __asm__ is probably not in the C spec, but every compiler implements it...

For instance, Go directly calls Linux system calls without going through libc (which has led to lots of workarounds to emulate some glibc-specific behaviour -- swings and roundabouts I guess...).

Other operating systems do not provide this kind of compatibility guarantee and instead require you to always go through libc as the syscall ABI is not stable (though ultimately, you can still use __asm__ if you so choose).

In any case, file descriptors are definitely not a libc construct on Linux.

reply
Yes, you can. Then you don't write against the OS, but against the kernel. Sometimes that works, because the kernel is a separate project, and sometimes it doesn't; you gave an example yourself.

> In any case, file descriptors are definitely not a libc construct on Linux.

File descriptors definitely come from the kernel, but they also exist as a concept in libc, and I was referring to them as such. I was saying that I depend on non-portable libc functions, even though I value portability, because the API is just so nice. I did not want to indicate that I am doing syscalls directly.

reply
syscalls are an implementation detail of some libc impls on some platforms, but the C spec does not mention syscalls.
reply
I did mean file descriptors.
reply
Then I'm confused by what you meant, because you can use fds with or without libc.
reply
I don't want to bypass libc in general, because I care about portability, but fds are just a nice interface, so I still use them instead of FILE, which would be the portable choice. My calls are still subject to OS choices that differ from the kernel, since I don't bypass libc.
reply
Tons of driver code does this.
reply
You had me with “avoid C standard library” but lost me at “invoking Linux syscalls directly”.

Windows support is a requirement, and no WSL2 doesn’t count.

C standard library is pretty bad and it’d be great if not using it was a little easier and more common.

reply
Obviously only a requirement if you intend your software to run under windows. But if you don't, why bother. Not all software is intended to be distributed to users far and wide. Some of it is just for yourself, and some of it will only ever run on linux servers.
reply
> some of it will only ever run on linux servers.

I’ve spent quite a lot of time dealing with code that “will only ever run on Linux” which did not, in fact, only ever run on Linux!

Obviously for hobby projects anyone can do what they want. But adult projects should support Windows imho and consider Windows support from the start. Cross-platform is super easy unless you choose to make it hard.

reply
> But adult projects should support Windows imho and consider Windows support from the start.

Hope whatever "adult" is working on the project this is getting paid handsomely. They'd certainly need to pay me big bucks to care about Windows support.

In any case, the Linux system call ABI is becoming a lingua franca of systems programming. The BSDs have implemented Linux system calls. Windows has straight up included Linux in the system. It looks like simply targeting Linux can easily result in a binary that actually does run anywhere.

reply
Try playing audio or displaying an image on the screen using only documented syscalls. And make it work on all the platforms you mentioned.
reply
Displaying an image on the screen is not that difficult a task. Linux has framebuffer device files. You open one, issue an ioctl to get metadata like screen geometry and color depth, then mmap the framebuffer as an array of pixels you can CPU-render to. It's eerily similar to the way terminal applications work.

It's also possible to use Linux KMS/DRM without any user space libraries.

https://github.com/laxyyza/drmlist/

The problem with hardware-accelerated rendering is that much of the associated functionality is actually implemented in user space and therefore not part of the kernel. That unfortunately forces libc on us. One would have to reimplement things like Mesa in order to do this. Not impossible, just incredibly time consuming.

Things could have been organized in a way that makes this feasible. Example: SQLite. You can plug in your own memory allocation functions and VFS layer. I've been slowly porting the SQLite Unix VFS to freestanding Linux in order to use it in my freestanding applications.

reply
> Windows has straight up included Linux in the system. It looks like simply targeting Linux can easily result in a binary that actually does run anywhere.

Kind of. But not really. WSL2 is a thing, but most code isn't running in WSL2, so if your thing “runs on windows” but requires a WSL2 context, then oftentimes it might as well not exist.

> They'd certainly need to pay me big bucks to care about Windows support.

The great irony is that Windows is a much much much better and more pleasant dev environment. Linux is utterly miserable and it’s all modern programmers know. :(

reply
There is also WSL1 and Cygwin and MinGW/MSYS2.

And no WSL2 is not a newer version of WSL1, they are entirely different products.

reply
MinGW is awful. Avoid. Cygwin is honestly not really something that has come up in my career.

I don’t know why Linux people are so adamant about breaking their backs - and the backs of everyone around them - to try to do things TheLinuxWay. It’s weird. IMHO it’s far, far better to take a “when in Rome” approach.

My experience is that Linux people are MUCH worse about refusing to take a when-in-Rome approach than the other way around. The great tragedy is that the Linux way is not always the best way.

reply
I found MinGW to be quite nice, but ymmv.

> to try and do things TheLinuxWay

It's not really about TheLinuxWay. It's more that Microsoft completely lacks POSIX tools, the compiler needs a complete IDE installed, which I would need a license for, and the compiler invocation doesn't really correspond to any other compiler's.

reply
> Microsoft completely lacks POSIX tools

True!

> compiler needs to have a complete IDE installed

Not true. You can download just MSVC the toolchain sans IDE. Works great. https://stackoverflow.com/questions/76792904/how-to-install-...

> compiler invocation also doesn't really correspond to any other compiler

True. But you don’t have to use MSVC. You can just use Clang for everything.

Clang on Windows does typically use the Microsoft C++ standard library implementation. But that’s totally fine and won’t impact your invocation.

reply
But then I don't understand your complaints against MSYS2/MinGW. MSYS2 UCRT (the default environment) is a collection of POSIX tools plus GCC compiling against the Microsoft C++ standard library. The only difference from what you say is completely fine is that it uses GCC instead of Clang. Other MSYS2 environments use Clang instead of GCC.

MinGW is the open-source implementation of the Windows API, so that you can use the Microsoft C++ standard library, without needing to use the MS toolchain.

reply
Using MinGW and POSIX tools is trying to force a square Linux peg through a round Windows hole. You can try and force it if you want.

If you started with a native Windows-only project you would never use MinGW. Probably 0.01% of Windows projects use GCC.

Over the years I have come to associate “project uses MinGW” with “this will probably take two days of my life to get running and I’m going to hit hurdle after hurdle after hurdle”.

The whole Linux concept of a “dev environment” is kind of really bad and broken and is why everyone uses Docker or Linux or one of a dozen different mutually incompatible environments.

The actually correct thing to do is for projects to include their fucking dependencies so they JustWork without jumping through all these hoops.

reply
> Not true. You can download just MSVC the toolchain sans IDE. Works great.

What is the standalone MS build system called?

reply
The standalone IDE-less build tools come with MSBuild.exe. So you just use that.
reply
I don't think we are talking about the same type of software? The type I was talking about will only ever run on Linux because it's a (HTTP-ish) server that will only ever run on Linux.

Probably a server that is only ever run by a single company on a single CPU type. That company will have complete control of the OS stack, so if it says no Windows, then no Windows has to be supported.

reply
I've worked on dozens of "adult" projects for 30 years, only 2 of which ever needed to run against the Win32 API, and only one of which ever ran on Windows. There's a whole world of people out there who don't care about Windows compatibility because it's usually not relevant to the work we do.
reply
deleted
reply
You can make CRT-free Win32 programs; read this guide[1] and you're all set. I've written a couple of CLI utilities which are completely CRT-free and weigh in at just a few kilobytes.

[1]: https://nullprogram.com/blog/2023/02/15/

reply
Almost freestanding. It still requires you to link against kernel32 and use the functions it provides. This is because issuing system calls directly to the Windows kernel is not supported. The kernel developers reserve the right to change things like system call numbers, so they can't be hardcoded into the application.
reply
Kernel32.dll is loaded into all Windows processes by default, so you actually can have a valid, working Windows binary with 0 entries in the import table. See here[1] for a "Hello world" program written as such.

[1]: https://gist.github.com/rfl890/195307136c7216cf243f7594832f4...

reply
That's interesting. How does it work?

  PEB *peb = (PEB *)__readgsqword(0x60);
    
  LIST_ENTRY *current_entry = peb->Ldr->InMemoryOrderModuleList.Flink->Flink;
It just obtains a pointer to the loader's data structures out of nowhere?

Is this actually supported by Microsoft or are people going to end up in a Raymond Chen article if they use this?

reply
It's in no way supported by Microsoft (and is flagged by most anti-viruses); it was just to demonstrate that kernel32.dll is available for "free" in all programs.

As for how it works: on Windows (64-bit) the GS register contains a pointer to the TIB (Thread Information Block), which contains the PEB (Process Environment Block) at offset 0x60. The PEB has an Ldr field which contains a doubly-linked list of every loaded module in the process. From there I obtain the requested module's base address (here kernel32.dll), parse the PE headers to find the function's address, and return it.
reply
> Almost freestanding. It still requires you to link against kernel32

Nitpick: the phrase “link against kernel32” feels like a Linux-ism. If you’re only calling a few functions, you need to load kernel32.dll and call some functions in it. But that’s a slightly different operation than linking against it. At least that's how I’ve always used the term “link”.

You’re not wrong in principle. But Linux and Windows do a lot of things differently wrt linking and loading libs. (I think Windows does it waaay better but ymmv)

reply
> (I think Windows does it waaay better but ymmv)

Can you elaborate on that?

Btw., I don't want to bash Windows here, I think the Windows core OS developers are (one of) the only good developers at Microsoft. The NT kernel is widely praised for its quality and the actual OS seems to be really solid. They just happen to also have lots of shitty company sections that release crappy software and bundle malware, ads and telemetry with the actual OS.

reply
Windows 11 Pro with O&O Shutup is perfectly fine. You’re not wrong and the trend is concerning.

But on the actual topic. I think “Linux” does a few things way worse. (Technically not Linux but GCC/Clang blah blah blah).

Linux does at least three dumb things. 1) Treat static/dynamic linking the same 2) No import line 3) global system shared libraries.

All three are bad. Shared/dynamic libraries should be black boxes. Import libs are just objectively superior to the pure hell that is linking an old version of glibc. And the big ball of global shared libraries is such a catastrophic failure that Docker was invented to hack around it.

reply
Can you write that so that people who are dumb and don't know the Windows way also get it?
reply
> Treat static/dynamic linking the same

Imagine you have an executable with a random library that has a global variable. Now you have a shared/dynamic library that just so happens to use that same library deep in its bowels. It's not in the public API, it's an implementation detail. Is the global variable shared across the exe and the shared lib or not? On Linux it's shared, on Windows it's not.

I think the Windows way is better. Things randomly breaking because different DLLs randomly used the same symbol under the hood is super dumb imho. Treating them as black boxes is better. IMHO. YMMV.

> No import lib (typo! lib, not line)

In Linux (not the kernel blah blah blah) when you link against a shared library - like glibc - you typically link the actual shared library. So on your build machine you pass /path/to/glibc.so as an argument. Then when your program runs it dynamically loads whatever version of glibc.so is on that machine.

On Windows you don't link against foo.dll. Instead you link against a thin, small import lib called (ideally) foo.imp.lib.

This is better for a few reasons. For one, when you're building a program that intends to use a shared library you shouldn't actually require a full copy of that lib. It's strictly unnecessary by definition.

Linux (gcc/clang blah blah blah) makes it really hard to cross-compile and really hard to link against an older version of a library than the one on your system. It should be trivial to link against glibc 2.15 even if your system is on glibc 2.40.

> global system shared libraries

The Linux Way is to install shared libraries into the global path. This way when openssl has a security vuln you only need to update one library instead of recompile all programs.

This architecture has proven - imho objectively - to be an abject and catastrophic failure. It's so bad that the world invented Docker so that a big complicated expensive slow packaging step has to be performed just to reliably run a program with all its dependencies.

Linux Dependency Hell is 100x worse than Windows DLL Hell. In Windows the Microsoft system libraries are ultra stable. And virtually nothing gets installed into the global path. Computer programs then simply include the DLLs and dependencies they need. Which is roughly what Docker does. But Docker comes with a lot of other baggage and complexity that honestly just isn't needed.

These are my opinions. They are not held by the majority of HN commenters. But I stand by all of them! Not mentioned is that Windows has significantly better profilers and debuggers than Linux. That may change in the next two years.

Also, super duper unpopular opinion, but bash sucks and any script longer than 10 lines should be written in a real language with a debugger.

reply
> On Linux it's shared, on Windows its not.

Yes, the default compiler invocation makes all symbols exported. But leaving it like that is super lazy, it will likely break things (like you wrote). You can change the default with -fvisibility=[default|internal|hidden|protected] and it's kind of expected that you do. Oh, and I just found out that GCC has -fvisibility-ms-compat, to make it work like the MS compiler.

> Instead you link against a thin, small import lib called (ideally) foo.imp.lib.

Interesting. How is that file created? Is it created automatically, when you build foo.dll? How is it shipped? Is it generally distributed with foo.dll, because then I don't really see the benefit of linking against foo2.15.imp.lib compared to foo2.15.dll.

> It should be trivial to link against glibc2.15 even if your system is on glibc2.40.

I don't know if you know that, but on Linux glibc 2.40 is not really only version 2.40. It includes all the versions up to 2.40. When you link against a symbol that was last changed in 2.15, you link against glibc 2.15, not against glibc 2.40. If you only use symbols from glibc 2.15, then you have effectively linked the complete program against glibc 2.15.

But yes, enforcing this should be trivial. I think this a common complaint.

> The Linux Way is to install shared libraries into the global path.

Only in so far, as on Windows you put the libraries into 'C:\Program Files\PROGRAM\' and on Linux into '/usr/lib/PROGRAM/'. You of course shouldn't dump all your libraries into '/usr/lib'. That's different when you install a library by itself. I don't know how common that is on Windows?

I don't really know what problems you have in mind, but it seems like you think a program would have a dependency on 'libfoo.so', so at runtime it could randomly break by getting linked against another libfoo that happens to be in the library path. But that is not the case: you link against '/usr/lib/foo.so.6'. Relying on runtime environment paths for linking is as bad as calling execve("bash foo"), and that is a security bug. Paths are for the user, so that he doesn't need to specify the full path, not for programs to use for dependency management. Also, when you don't want updates to minor versions, you can link to '/usr/lib/foo.so.6.2'. And when you don't want bugfixes, you can link against '/usr/lib/foo.so.6.2.15', but that would be super dumb in my opinion. On Linux ABIs have their own versions, separate from the library versions; I agree that this can be confusing for newcomers.

A fundamental difference is also that there is a single entity controlling installation on Linux. It is the responsibility of the OS to install programs; bypassing that just creates a huge mess. I think that is the better way, and both Apple and Microsoft are moving in that direction, though likely for other reasons (corporate control). This doesn't mean that the user can't install his own programs which aren't included in the OS repository. OS repository != OS package manager. I think when you can bother to create foo-installer.exe, you should also create foo.deb. Extracting foo.zip into C:\ is also a dumb idea, yet some people think it suddenly isn't dumb anymore when doing it on Linux.

PIP and similar projects are a bad idea, in my opinion. When someone wants to create their own package system breaking the OS, they should at least have the decency to roll it in /opt. Actually, that is not a problem in Python proper. They have essentially solved it for decades, and all that dance with venv, uv and whatever else is completely unnecessary. You can install different Python versions into the OS path. Python installs into /usr/bin/python3.x and creates /usr/lib/python3.x/ by default, and each Python version will only use the appropriate libraries. That's my unpopular opinion. That mess is why Docker was created, but in my opinion it does not come from following the Linux way, but from actively sabotaging it.

> Also, super duper unpopular opinion, but bash sucks and any script longer than 10 lines should be written in a real language with a debugger.

Bash's purpose is to cobble programs together and set up pipes, process hierarchies and job control. It excels at this task. Using it for anything else sucks, but I don't think that is widely disputed.

reply
> You can change the default

My unfortunate experience is that changing the default just breaks other things.

I really blame C++ as the root evil. This type of behavior really really ought to be part of the language spec. It’s super weird that it’s not.

> How is [foo.imp.lib] file created?

When the DLL is compiled

> I don't really see the benefit of linking against foo2.15.imp.lib compared to foo2.15.dll

The short version is “because the whole file isn’t actually necessary”.

Zig moves mountains to make cross-compiling possible. Linux is BY FAR the hardest platform to cross-compile for. macOS and Windows are trivial. Linux is alllllmost impossible. Part of their trick to make it possible is to generate stub .so files which are effectively import libs. Which is what should have been used all along! https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...

> When you link against a symbol that was last changed in 2.15, you link against glibc2.15, not against glibc2.40. If you only use symbols from glibc2.15, then you have effectively linked the complete program against glibc2.15.

It really really needs to be explicit. It’s otherwise impossible to control. And hard to understand where a newer symbol is coming from.

> on Windows you put the libraries into 'C:\Program Files\PROGRAM\'

It is relatively rare for a program in Program Files to add itself to the PATH.

> they should have at least the decency to roll it in /opt

I think folders like /opt and /usr/lib are pure evil. Programs should include their %{#^]{}^]+}*}^ dependencies.

uv solves a lot of the Python problems. Every project gets to define its own version of Python and own collection of libraries with whatever god forsaken version resolution. Having /usr/lib/python3.x is a failure state.

reply
Linux does none of those things. That's user space stuff. Linux loads your ELF and jumps to its entry point. That's it.

Linux is so great you're actually free to remake the entire user space in your image if you want. It's the only kernel that lets you do it, all the others force you to go through C library nonsense, including Windows.

The glibc madness you described is just a convention, kept in place by inertia. You absolutely can trash glibc if you want to. I too have a vision for Linux user space and am working towards realizing it. Nothing will happen unless someone puts the work in.

reply
Yes that’s all filed under blah blah blah.

Some people use “Linux” to exclusively refer to the Linux kernel. Most people do not.

reply
Linux by default does mean the Linux kernel, but in my reply I didn't care about that either. As long as everyone knows what is meant, that is fine in my opinion.

I think it is important to have GNU/Linux in mind, because there are OSs that don't use glibc and work totally differently, so none of your complaints apply to them. But yes, most people think of GNU/Linux when you tell them about Linux.

It is also relevant to consider that there is no OS called GNU/Linux. The OSs are called Debian, Arch, OpenSuSE, Fedora, ... . It is fine for different OSs to have differently working runtime linkers and installation methods, but some people act surprised when they find out that ignoring that doesn't work.

reply
Loading means creating a memory image of the library. Linking means resolving the symbols to addresses within that memory image.

Loading a library and calling some functions from it is linking. The function pointer you receive is your link to the library function.

reply
You’re not wrong per se. But it was phrased in a very linuxy way imho.

> Linking means resolving the symbols to addresses within that memory image.

Well, you can call LoadLibrary and GetProcAddress. Which is arguably linking. But does not use the linker at link time. Although LoadLibrary is in kernel32!

reply
Linker is short for Link Loader, so I don't know what your definition of linking is if it doesn't include loading.
reply
Great post!
reply
> Windows support is a requirement

Why, exactly?

reply
> Windows support is a requirement...

For what?

There is some software for which Windows support is required. There are others for which it is not, and never will be. (And for an article about running ELF files on RiscV with a Linux OS, the "Windows support" complaint seems a bit odd...)

reply
A requirement from whom? To do what?
reply
You can do this in Windows too, useful if you want tiny executables that use minimum resources.

I wrote this little systemwide mute utility for Windows that way, annoying to be missing some parts of the CRT but not bad, code here: https://github.com/pablocastro/minimute

reply
I thought windows had an unstable syscall interface?
reply
Pretty much yeah.

You have your usual Win32 API functions in libraries like Kernel32, User32, and GDI32, but since Windows XP those don't actually make system calls themselves. The actual system calls are in NTDLL and Win32U. Lots of functions you can import, and they're basically one instruction long: just SYSENTER for the native version, or a switch back to 64-bit mode for a WOW64 DLL. The names of these functions always begin with Nt, like NtCreateFile. There's a corresponding kernel-mode call that starts with Zw instead, so in kernel mode you have ZwCreateFile.

But the system call numbers used with SYSENTER are indeed reordered every time there's a major version change to Windows, so you just call into NTDLL or Win32U instead if you want to directly make a system call.

reply
It looks like that project does link against the usual Windows DLLs, it just doesn't use a static or dynamic C runtime.
reply
Windows isn’t quite like Linux in that typically apps don’t make syscalls directly. Maybe you could say what’s in ntdll is the system call contract, but in practice you call the subsystem specific API, typically the Win32 API, which is huge compared to the Linux syscall list because it includes all sorts of things like UI, COM (!), etc.

The project has some of the properties discussed above such as not having a typical main() (or winmain), because there’s no CRT to call it.

reply