I’ve spent quite a lot of time dealing with code that will “only ever run on Linux” which did not, in fact, only ever run on Linux!
Obviously for hobby projects anyone can do what they want. But adult projects should support Windows imho and consider Windows support from the start. Cross-platform is super easy unless you choose to make it hard.
Hope whatever "adult" is working on this project is getting paid handsomely. They'd certainly need to pay me big bucks to care about Windows support.
In any case, the Linux system call ABI is becoming a lingua franca of systems programming. BSDs have implemented Linux system calls. Windows has straight up included Linux in the system. It looks like simply targeting Linux can easily result in a binary that actually does run anywhere.
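To make that concrete, here's a minimal sketch of a freestanding x86-64 program that targets nothing but the raw syscall ABI (1 and 60 are the stable write/exit numbers on that architecture; it should build with something like cc -nostdlib -static hello.c):

    /* hello.c — freestanding: no libc, just the kernel's syscall ABI */
    static long sys3(long nr, long a, long b, long c) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a), "S"(b), "d"(c)
                          : "rcx", "r11", "memory");
        return ret;
    }

    void _start(void) {
        static const char msg[] = "hello, kernel\n";
        sys3(1, 1, (long)msg, sizeof msg - 1);  /* write(1, msg, len) */
        sys3(60, 0, 0, 0);                      /* exit(0) — never returns */
    }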
It's also possible to use Linux KMS/DRM without any user space libraries.
https://github.com/laxyyza/drmlist/
The problem with hardware accelerated rendering is that much of the associated functionality is actually implemented in user space and therefore not part of the kernel. Those user space libraries unfortunately force libc on us. One would have to reimplement things like Mesa in order to do this. Not impossible, just incredibly time consuming.
Things could have been organized in a way that makes this feasible. Example: SQLite. You can plug in your own memory allocation functions and VFS layer. I've been slowly porting the SQLite Unix VFS to freestanding Linux in order to use it in my freestanding applications.
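For the SQLite example, a rough sketch of what plugging in your own allocator looks like (the my_* functions are placeholders for whatever a freestanding environment provides; a custom VFS is registered analogously with sqlite3_vfs_register()):

    #include <sqlite3.h>

    /* Hypothetical allocator hooks provided by the freestanding environment. */
    extern void *my_malloc(int n);
    extern void  my_free(void *p);
    extern void *my_realloc(void *p, int n);
    extern int   my_size(void *p);      /* size of an allocation made above          */
    extern int   my_roundup(int n);     /* round a request up to allocation granularity */
    static int   my_init(void *unused)     { (void)unused; return SQLITE_OK; }
    static void  my_shutdown(void *unused) { (void)unused; }

    static sqlite3_mem_methods mem = {
        my_malloc, my_free, my_realloc, my_size, my_roundup,
        my_init, my_shutdown, 0
    };

    int use_custom_allocator(void) {
        /* Must run before sqlite3_initialize() / the first database is opened. */
        return sqlite3_config(SQLITE_CONFIG_MALLOC, &mem);
    }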
Kind of. But not really. WSL2 is a thing. But most code isn’t running in WSL2 so if your thing “runs on windows” but requires running in a WSL2 context then oftentimes it might as well not exist.
> They'd certainly need to pay me big bucks to care about Windows support.
The great irony is that Windows is a much much much better and more pleasant dev environment. Linux is utterly miserable and it’s all modern programmers know. :(
And no, WSL2 is not a newer version of WSL1; they are entirely different products.
I don’t know why Linux people are so adamant to break their backs - and the backs of everyone around them - to try and do things TheLinuxWay. It’s weird. IMHO it’s far far far better to take a “when in Rome” approach.
My experience is that Linux people are MUCH worse at refusing to take a When in Rome approach than the other way. The great tragedy is that the Linux way is not always the best way.
> to try and do things TheLinuxWay
It's not really about TheLinuxWay. It's more that Microsoft completely lacks POSIX tools, the compiler needs to have a complete IDE installed (which I would need a license for), and the compiler invocation also doesn't really correspond to any other compiler.
True!
> compiler needs to have a complete IDE installed
Not true. You can download just the MSVC toolchain, sans IDE. Works great. https://stackoverflow.com/questions/76792904/how-to-install-...
> compiler invocation also doesn't really correspond to any other compiler
True. But you don’t have to use MSVC. You can just use Clang for everything.
Clang on Windows does typically use the Microsoft C++ standard library implementation. But that’s totally fine and won’t impact your invocation.
MinGW is the open-source implementation of the Windows API headers and import libraries, so that you can target Windows and the Microsoft C runtime without needing to use the MS toolchain.
If you started with a native Windows-only project you would never use MinGW. Probably 0.01% of Windows projects use GCC.
Over the years I have come to associate “project uses MinGW” with “this will probably take two days of my life to get running and I’m just going to hit hurdle after hurdle after hurdle”.
The whole Linux concept of a “dev environment” is kind of really bad and broken and is why everyone uses Docker or Linux or one of a dozen different mutually incompatible environments.
The actually correct thing to do is for projects to include their fucking dependencies so they JustWork without jumping through all these hoops.
What is the standalone MS build system called?
Probably a server that is only ever run by a single company on a single CPU type. That company will have complete control of the OS stack, so if it says no Windows, then no Windows has to be supported.
[1]: https://gist.github.com/rfl890/195307136c7216cf243f7594832f4...
PEB *peb = (PEB *)__readgsqword(0x60); // x64: the GS base is the TEB; the qword at offset 0x60 is the PEB pointer
LIST_ENTRY *current_entry = peb->Ldr->InMemoryOrderModuleList.Flink->Flink; // loader's module list: head -> exe -> second module (normally ntdll.dll)
It just obtains a pointer to the loader's data structures out of nowhere? Is this actually supported by Microsoft or are people going to end up in a Raymond Chen article if they use this?
Nitpick: the phrase “link against kernel32” feels like a Linux-ism. If you’re only calling a few functions you need to load kernel32.dll and call some functions in it. But that’s a slightly different operation than linking against it. At least how I’ve always used the term link.
You’re not wrong in principle. But Linux and Windows do a lot of things differently wrt linking and loading libs. (I think Windows does it waaay better but ymmv)
Can you elaborate on that?
Btw, I don't want to bash Windows here, I think the Windows core OS developers are (some of) the only good developers at Microsoft. The NT kernel is widely praised for its quality and the actual OS seems to be really solid. They just happen to also have lots of shitty company sections that release crappy software and bundle malware, ads and telemetry with the actual OS.
But on the actual topic. I think “Linux” does a few things way worse. (Technically not Linux but GCC/Clang blah blah blah).
Linux does at least three dumb things: 1) treats static/dynamic linking the same, 2) no import libs, 3) global system shared libraries.
All three are bad. Shared/dynamic libraries should be black boxes. Import libs are just objectively superior to the pure hell that is linking against an old version of glibc. And the big ball of global shared libraries is such a catastrophic failure that Docker was invented to hack around it.
Imagine you have an executable with a random library that has a global variable. Now you have a shared/dynamic library that just so happens to use that library deep in its bowels. It's not in the public API, it's an implementation detail. Is the global variable shared across the exe and shared lib or not? On Linux it's shared, on Windows it's not.
I think the Windows way is better. Things randomly breaking because different DLLs randomly used the same symbol under the hood is super dumb imho. Treating them as black boxes is better. IMHO. YMMV.
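For anyone who hasn't hit it, here's a sketch of how the sharing happens on the ELF side (file and symbol names are made up):

    /* helper.c — a little static helper compiled into BOTH liba.so and libb.so */
    int g_count = 0;
    void bump(void) { g_count++; }

    /* With default ELF symbol visibility, liba.so and libb.so each export
       g_count and bump and access them through the GOT. At load time the
       dynamic linker resolves every reference to the FIRST definition in
       search order, so both libraries silently end up sharing one g_count
       even though each shipped its own copy. A Windows DLL keeps its copy
       private unless the symbol is explicitly exported and imported. */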
> No import libs
In Linux (not the kernel blah blah blah) when you link a shared library - like glibc - you typically link the actual shared library. So on your build machine you pass /path/to/glibc.so as an argument. Then when your program runs it dynamically loads whatever version of glibc.so is on that machine.
On Windows you don't link against foo.dll. Instead you link against a thin, small import lib called (ideally) foo.imp.lib.
This is better for a few reasons. For one, when you're building a program that intends to use a shared library you shouldn't actually require a full copy of that lib. It's strictly unnecessary by definition.
Linux (gcc/clang blah blah blah) makes it really hard to cross-compile and really hard to link against an older version of a library than the one on your system. It should be trivial to link against glibc2.15 even if your system is on glibc2.40.
> global system shared libraries
The Linux Way is to install shared libraries into the global path. This way when openssl has a security vuln you only need to update one library instead of recompile all programs.
This architecture has proven - imho objectively - to be an abject and catastrophic failure. It's so bad that the world invented Docker so that a big complicated expensive slow packaging step has to be performed just to reliably run a program with all its dependencies.
Linux Dependency Hell is 100x worse than Windows DLL Hell. In Windows the Microsoft system libraries are ultra stable. And virtually nothing gets installed into the global path. Computer programs then simply include the DLLs and dependencies they need. Which is roughly what Docker does. But Docker comes with a lot of other baggage and complexity that honestly just isn't needed.
These are my opinions. They are not held by the majority of HN commenters. But I stand by all of them! Not mentioned is that Windows has significantly better profilers and debuggers than Linux. That may change in the next two years.
Also, super duper unpopular opinion, but bash sucks and any script longer than 10 lines should be written in a real language with a debugger.
Yes, the default compiler invocation exports all symbols. But leaving it like that is super lazy; it will likely break things (like you wrote). You can change the default with -fvisibility=[default|internal|hidden|protected] and it's kind of expected that you do. Oh, and I just found out that GCC has -fvisibility-ms-compat, to make it work like the MS compiler.
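For reference, a sketch of what that looks like in source with GCC/Clang (the PUBLIC_API macro name is made up):

    /* Build the shared library with -fvisibility=hidden; only symbols marked
       below end up in its dynamic symbol table (roughly __declspec(dllexport)). */
    #define PUBLIC_API __attribute__((visibility("default")))

    PUBLIC_API int do_thing(int x);   /* part of the library's public interface */
    int internal_detail(int x);       /* hidden: other DSOs cannot bind to it   */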
> Instead you link against a thin, small import lib called (ideally) foo.imp.lib.
Interesting. How is that file created? Is it created automatically when you build foo.dll? How is it shipped? Is it generally distributed with foo.dll? Because then I don't really see the benefit of linking against foo2.15.imp.lib compared to foo2.15.dll.
> It should be trivial to link against glibc2.15 even if your system is on glibc2.40.
I don't know if you know that, but on Linux glibc2.40 is not really only version 2.40. It includes all the versions up to 2.40. When you link against a symbol that was last changed in 2.15, you link against glibc2.15, not against glibc2.40. If you only use symbols from glibc2.15, then you have effectively linked the complete program against glibc2.15.
But yes, enforcing this should be trivial. I think this is a common complaint.
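For what it's worth, a GNU toolchain can also pin individual symbols explicitly. A sketch (the version string must be one glibc actually ships for that symbol; GLIBC_2.2.5 is the x86-64 baseline):

    #include <string.h>

    /* Bind memcpy to the old baseline version instead of the newest one
       (memcpy@GLIBC_2.14 on x86-64), so the binary also runs on older systems. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    void *copy_buf(void *dst, const void *src, size_t n) {
        return memcpy(dst, src, n);
    }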
> The Linux Way is to install shared libraries into the global path.
Only insofar as on Windows you put the libraries into 'C:\Program Files\PROGRAM\' and on Linux into '/usr/lib/PROGRAM/'. You of course shouldn't dump all your libraries into '/usr/lib'. It's different when you install a library by itself; I don't know how common that is on Windows.
I don't really know what problems you have in mind, but it seems like you think a program would have a dependency on 'libfoo.so', so at runtime it could randomly break by getting linked against another libfoo that happens to be in the library path. But that is not the case; you link against '/usr/lib/foo.so.6'. Relying on runtime environment paths for linking is as bad as calling execve("bash foo"), and this is a security bug. Paths are for the user, so that he doesn't need to specify the full path, not for programs to use for dependency management. Also, when you don't want updates to minor versions, you can link to '/usr/lib/foo.so.6.2'. And when you don't want bugfixes, you can link against '/usr/lib/foo.so.6.2.15', but that would be super dumb in my opinion. On Linux, ABIs have their own versions, separate from the library versions; I agree that this can be confusing for newcomers.
A fundamental difference is also that there is a single entity controlling installation on Linux. It is the responsibility of the OS to install programs; bypassing that just creates a huge mess. I think that is the better way and both Apple and Microsoft are moving that way, but likely for other reasons (corporate control). This doesn't mean that the user can't install his own programs which aren't included in the OS repository. OS repository != OS package manager. I think when you can bother to create foo-installer.exe, you should also create foo.deb. Extracting foo.zip into C:\ is also a dumb idea, yet some people think it suddenly isn't dumb anymore when doing it on Linux.
Pip and similar projects are a bad idea, in my opinion. When someone wants to create their own package system breaking the OS, they should have at least the decency to roll it in /opt. Actually that is not a problem in Python proper. They essentially solved it decades ago, and all that dance with venv, uv and whatever else is completely unnecessary. You can install different Python versions side by side into the OS path. Python installs into /usr/bin/python3.x and creates /usr/lib/python3.x/ by default. Each Python version will only use the appropriate libraries. That's my unpopular opinion. That mess is why Docker was created, but in my opinion that does not come from following the Linux way, but from actively sabotaging it.
> Also, super duper unpopular opinion, but bash sucks and any script longer than 10 lines should be written in a real language with a debugger.
Bash's purpose is to cobble programs together and set up pipes and process hierarchies and job control. It excels at this task. Using it for anything else sucks, but I don't think that is widely disputed.
My unfortunate experience is that changing the default just breaks other things.
I really blame C++ as the root evil. This type of behavior really really ought to be part of the language spec. It’s super weird that it’s not.
> How is [foo.imp.lib] file created?
When the DLL is compiled
> I don't really see the benefit of linking against foo2.15.imp.lib compared to foo2.15.dll
The short version is “because the whole file isn’t actually necessary”.
Zig moves mountains to make cross-compiling possible. Linux is BY FAR the hardest platform to cross-compile for. macOS and Windows are trivial; for Linux it’s alllllmost impossible. Part of their trick to make it possible is to generate stub .so files which are effectively import libs. Which is what should have been used all along! https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
> When you link against a symbol that was last changed in 2.15, you link against glibc2.15, not against glibc2.40. If you only use symbols from glibc2.15, then you have effectively linked the complete program against glibc2.15.
It really really needs to be explicit. It’s otherwise impossible to control. And hard to understand where a newer symbol is coming from.
> on Windows you put the libraries into 'C:\Program Files\PROGRAM\'
It is relatively rare for a program in Program Files to add itself to the PATH.
> they should have at least the decency to roll it in /opt
I think folders like /opt and /usr/lib are pure evil. Programs should include their %{#^]{}^]+}*}^ dependencies.
uv solves a lot of the Python problems. Every project gets to define its own version of Python and own collection of libraries with whatever god forsaken version resolution. Having /usr/lib/python3.x is a failure state.
Linux is so great you're actually free to remake the entire user space in your image if you want. It's the only kernel that lets you do it, all the others force you to go through C library nonsense, including Windows.
The glibc madness you described is just a convention, kept in place by inertia. You absolutely can trash glibc if you want to. I too have a vision for Linux user space and am working towards realizing it. Nothing will happen unless someone puts the work in.
Some people use “Linux” to exclusively refer to the Linux kernel. Most people do not.
I think it is important to have GNU/Linux in mind, because there are OSs that don't use glibc and work totally differently, so none of your complaints apply. But yes, most people think of GNU/Linux when you tell them about Linux.
It is also relevant to consider that there is no OS called GNU/Linux. The OSs are called Debian, Arch, OpenSuSE, Fedora, ... . It is fine for different OS to have differently working runtime linkers and installation methods, but some people act surprised when they find out ignoring that doesn't work.
Loading a library and calling some functions from it is linking. The function pointer you receive is your link to the library function.
> Linking means resolving the symbols to addresses within that memory image.
Well, you can call LoadLibrary and GetProcAddress. Which is arguably linking. But does not use the linker at link time. Although LoadLibrary is in kernel32!
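Something like this sketch, resolving a Win32 function by hand at runtime instead of letting the loader bind an import-table entry at startup:

    #include <windows.h>

    typedef int (WINAPI *MessageBoxA_t)(HWND, LPCSTR, LPCSTR, UINT);

    int say_hello(void) {
        /* Load the DLL ourselves and look the function up by name --
           "linking" done at runtime rather than via an import lib at build time. */
        HMODULE user32 = LoadLibraryA("user32.dll");
        if (!user32) return 0;
        MessageBoxA_t msgbox = (MessageBoxA_t)GetProcAddress(user32, "MessageBoxA");
        int ok = msgbox ? msgbox(NULL, "hello", "demo", MB_OK) : 0;
        FreeLibrary(user32);
        return ok;
    }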
Why, exactly?
For what?
There is some software for which Windows support is required. There are others for which it is not, and never will be. (And for an article about running ELF files on RISC-V with a Linux OS, the "Windows support" complaint seems a bit odd...)
I wrote this little systemwide mute utility for Windows that way, annoying to be missing some parts of the CRT but not bad, code here: https://github.com/pablocastro/minimute
You have your usual Win32 API functions found in libraries like Kernel32, User32, and GDI32, but since Windows XP those don't actually make system calls themselves. The actual system calls are found in NTDLL and Win32U. Lots of functions you can import, and they're basically one instruction long: just a SYSCALL for the native version, or a switch back to 64-bit mode for a WOW64 DLL. The names of the functions always begin with Nt, like NtCreateFile. There's a corresponding kernel-mode call that starts with Zw instead, so in kernel mode you have ZwCreateFile.
But the raw system call numbers are indeed reordered every time there's a major version change to Windows, so if you want to make a system call you go through NTDLL or Win32U rather than encoding the number yourself.
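A sketch of what that looks like (ntdll.dll is mapped into every process, so GetModuleHandle is enough; NtClose is about the simplest example):

    #include <windows.h>

    typedef LONG NTSTATUS;                        /* normally comes from winternl.h */
    typedef NTSTATUS (NTAPI *NtClose_t)(HANDLE);

    NTSTATUS close_via_ntdll(HANDLE h) {
        /* Call through ntdll's exported stub instead of issuing SYSCALL with a
           hard-coded number, because the numbers change between Windows builds. */
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
        NtClose_t pNtClose = (NtClose_t)GetProcAddress(ntdll, "NtClose");
        return pNtClose ? pNtClose(h) : (NTSTATUS)-1;
    }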
The project has some of the properties discussed above such as not having a typical main() (or winmain), because there’s no CRT to call it.