> glibc dependencies that can't be resolved because you need two different versions simultaneously in the same build somehow...

If you somehow experience an actual dependency issue that involves glibc itself, I'd like to hear about it. Because I don't think you ever will. The glibc people are so serious about backward and forward compatibility, you can in fact easily look up the last time they broke it: https://lwn.net/Articles/605607/

Now, if you're saying it's a dependency issue resulting from people specifying wrong glibc version constraints in their build… yeah, sure. I'm gonna say that happens because people are getting used to pinning dependency versions, which is so much the wrong thing to do with glibc it's not even funny anymore. Just remove the glibc pins if there are any.

As far as the toolchain as a whole is concerned… GCC broke compatibility a few times, mostly in C++ due to having to rework things to support newer C++ standards, but I vaguely remember there was a C ABI break somewhere on some architecture too.

reply
When was the last time you actually used .NET? Because that's absolutely not how it is. The .NET runtime is shipped by default with Windows and updated via WU. Never mind that you're talking about .NET Framework, which has been outdated for years.
reply
.NET runtime is not shipped with Windows, but once installed can be updated by WU.

Only the latest .NET Framework 4.8 is shipped with Windows at this point.

reply
The issue is in supporting older windows versions - which sadly is still a reality for most large-scale app developers.
reply
https://github.com/dotnet/core/blob/main/release-notes/10.0/...

.NET 10 supports a Windows 10 build from 10 years ago.

reply
Yes, and in the wild, believe it or not, you'll find Windows 7 and Windows 8.

We had only just deprecated support for XP in 2020 - this was for a relatively large app publisher with ~10M daily active users on Windows. The installer was a C++ stub which checked the system's installed .NET versions and manually wrote the app.config before starting the .NET wrapper (or launched the standalone .NET Framework installer if none was found at all).

The app originally supported .NET 3.5* (2.0 base) and 4, and the issue was that there was a ".NET Framework Client Profile" install on a surprising number of Windows PCs out there, and that version was incompatible with the app. If you just have a naked .NET exe, when you launch it (without an app.config in the same folder) the CLR will decide which version to run your app in - usually the "highest" version if several are detected... which in this case would start the app in the lightweight Client Profile and error out. Also, in the app.config file you can't tell it to avoid certain versions; you basically just say "use 4, then 2" and you're at the mercy of the CLR to decide which environment it starts you in.

This necessitated the overrides in a static/native C++ stub that did some more intelligent checks first before creating a tailored app.config and starting the .NET app.
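For anyone who never had to deal with this: a minimal sketch of what such a generated app.config looks like (the version strings are the standard CLR runtime identifiers; entries are listed in order of preference, and the CLR picks the first runtime that is actually installed):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <!-- "Use 4, then 2" - no way to exclude a specific install like Client Profile -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
    <supportedRuntime version="v2.0.50727"/>
  </startup>
</configuration>
```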

reply
Hey I have a PC running 98SE ;-)

I feel for those who have to support an OS no longer supported by the vendor. That's a tough position to be in, not only when a customer comes across a bug that is due to the OS, but because it keeps you from moving your desktop application forward.

reply
.NET versions go out of support faster than .NET Framework 4.8 does
reply
Point? I’m an SRE on a .NET project; we have been through 6-8-10 and it’s cost us about 2-ish hours of work each time. As long as you don’t get crazy, .NET upgrades are just a matter of a new SDK and runtime and away you go.
reply
You're talking about .NET for server applications, right? The discussion above is about client apps being distributed to Windows end users.
reply
Just ship a self contained build?
reply
We have a small MAUI part of the application; it's not massive, but it's working fine with .NET upgrades.
reply
A .NET Framework 4.8 app takes zero hours of work.

Why is it OK that you have to invest 2 hours times the number of apps just because MS has such a short life cycle for its .NET versions?

reply
Which has been fixed on .NET 5 and later.

.NET Framework should only be used for legacy applications.

Unfortunately there are still many around that depend on .NET Framework.

reply
Since .NET 10 still doesn't support Type Libraries, quite a few new Windows projects must be written in .NET Framework.

Microsoft sadly doesn't prioritize this so this might still be the case for a couple of years.

One thing I credit MS for is that they make it very easy to use modern C# features in .NET Framework. You can easily write new Framework assemblies with a lot of C# 14 features. You can also add a few interfaces and get most of it working (although not optimized by the CLR, e.g. Span). For an example see this project: https://www.nuget.org/packages/PolySharp/

It's also easy to target multiple frameworks with the same code, so you can write libraries that work in both .NET programs and .NET Framework programs.
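A rough sketch of what that multi-targeting setup looks like in a csproj (the PolySharp reference matches the package linked above; the exact version wildcard is illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Builds the same library for .NET Framework 4.8 and .NET 8 -->
    <TargetFrameworks>net48;net8.0</TargetFrameworks>
    <!-- Modern C# language versions work on both targets -->
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
  <ItemGroup>
    <!-- Polyfills the compiler-required types (records, init, etc.) on net48 -->
    <PackageReference Include="PolySharp" Version="1.*" PrivateAssets="all" />
  </ItemGroup>
</Project>
```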

reply
Most likely never will, because WinRT is the future and WinRT has replaced type libraries with .NET metadata. At least from MS point of view.

The current solution is to use the CLI tools just like C++.

However have you looked into ComWrappers introduced in .NET 8, with later improvements?

I still see VB 6 and Delphi as the best development experience for COM; in .NET it was never that great - there are full books about doing COM in .NET.

reply
.NET Framework 4.8 has a longer life cycle than the current .NET version
reply
When I first worked with dot NET I was confused with the naming and version numbers.
reply
This argument against .NET annoys me.

Because that’s pretty much any freaking thing - oh Python, oh PHP, oh driving a fork lift, oh driving a car.

Once you invest time in using and learning it is non issue.

I do get pissed off when I want to use some Python lib but it just doesn’t work out of the box, but there is nothing that works out of the box without investing some time.

Just like with a car: put a teenager into one and he will drive into the first tree.

Posting BS on Facebook shouldn’t be the benchmark for how easy things should be.

reply
It does, but current versions can be shipped with the application.

Thus this should be less of a problem.

reply
.NET Framework 5 or .NET Core 5?
reply
There is no .NET Framework 5. .NET Core 5 is just .NET 5.
reply
Well, traditionally, there was no Python/pip, JS/npm in Linux development, and for C/C++ development, the package manager approach worked surprisingly well for a long time.

However, there were version problems: some Linux distributions had only stable packages and therefore lacked the latest updates, and some had problems with multiple versions of the same library. This gave rise to the language-specific package managers. It solved one problem but created a ton of new ones.

Sometimes I wish we could just go back to system package managers, because at times, language-specific package managers do not even solve the version problem, which is their raison d'être.

reply
Nix devShells work quite well for Python development (don't know about JS). Nixpkgs is also quite up to date. I haven't looked back since adopting Nix for my dev environments.
reply
This is one of the things that tilts me about C and C++ that has nothing to do with memory safety: the compile/build UX is high friction. It's a mess for embedded (no general-purpose OS) too, in comparison to Rust + probe-rs.
reply
That hasn't been my experience at all. Cross-compiling anything in Rust was an unimaginable pain (3 years or so ago). While GCC's approach of having different binaries for different targets does have its issues, cross compiling just works.
reply
Ah sorry, I should clarify. Not referring specifically to cross compiling; just general compiling. In Rust, whether PC or embedded, I run cargo run. For C or C++, it's who knows: an idiosyncratic set of steps for each project, and the error messages leave me frustrated. I keep a set of notes for each project I touch to supplement its own docs. I am maybe too dumb or inexperienced in some cases, but I am having a hard time understanding why someone would design that as the UX.

I want to focus on the project itself; not jump through hoops in the build process. It feels hostile.

For cross compiling to ARM from a PC in Rust in particular, you run one CLI command to add the target. Then cargo run, and it compiles and flashes, with debug output.

These are anecdotes. I am probably doing something wrong, but it is my experience so far.
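The glue that makes plain cargo run flash an embedded board is a small Cargo config pointing at probe-rs as the runner. A sketch, assuming a Cortex-M4F board (the chip name and target triple here are examples; adjust for your hardware):

```toml
# .cargo/config.toml
[build]
# Added beforehand via `rustup target add thumbv7em-none-eabihf`
target = "thumbv7em-none-eabihf"

[target.thumbv7em-none-eabihf]
# `cargo run` now builds, flashes, and streams debug output via probe-rs
runner = "probe-rs run --chip STM32F401RETx"
```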

reply
That sounds like you don't have a build system for C/C++.
reply
deleted
reply
.net has been able to ship the runtime with your app for years.
reply
I went from Pop!_OS (Ubuntu) to EndeavourOS (Arch) because some random software shipped as an AppImage or whatever refused to run with Ubuntu's "latest" glibc, and it ticked me off. I just want to run more modern tooling. I haven't had any software I couldn't just run on Arch, going on over a year now.
reply
Indeed. As recently as 2 hours ago I had to change the way I build a private Tauri 2.0 app (bundled as an .AppImage) because it wouldn't work on the latest Kubuntu, but worked on Fedora and EndeavourOS. So now I have to build it on Ubuntu 22.04 via Docker. Fun fun.

Had fewer issues on EndeavourOS (Arch) than on Fedora overall, though... I will stay on Arch from now on.
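For anyone in the same boat, the Docker workaround is roughly this (a sketch, not a verified build file: the webkit2gtk/gtk package names and the tauri-cli invocation are from memory and may need adjusting for your project):

```dockerfile
# Build against the oldest glibc you want to support; glibc is backward
# compatible, so the resulting binary runs on 22.04 and anything newer.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    curl build-essential file \
    libwebkit2gtk-4.1-dev libgtk-3-dev librsvg2-dev
# Rust toolchain for the Tauri build
RUN curl --proto '=https' -sSf https://sh.rustup.rs | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
WORKDIR /app
COPY . .
RUN cargo install tauri-cli --locked && cargo tauri build --bundles appimage
```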

reply
.NET does have flags to include the necessary dependencies with the executable these days, so you can just run the .exe and don't need to install .NET on the host machine. Granted, that does increase the size of the app (not to mention adding a shitton of DLLs if you don't build as a single executable), but this at least is a solved problem.
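For reference, a minimal sketch of the publish settings involved (these are the standard MSBuild property names; the runtime identifier is an example):

```xml
<PropertyGroup>
  <!-- Bundle the runtime with the app: no .NET install needed on the host -->
  <SelfContained>true</SelfContained>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <!-- Pack everything into one .exe instead of a folder full of DLLs -->
  <PublishSingleFile>true</PublishSingleFile>
  <!-- Optionally trim unused framework code to cut the size back down -->
  <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
```

The same settings can also be passed on the command line to `dotnet publish` instead of living in the csproj.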
reply
They do now, after .NET Core and several other iterations. You'll also be shipping a huge executable compared to a framework-dependent .NET app (which can be surprisingly small).
reply
>Toolchains on linux are not clear from dependency hell either - ever install an npm package that needs cmake underneath?

That seems more a property of npm dependency management than linux dependency management.

To play devil's advocate, the reason npm dependency management is so much worse than kernel/OS management is that its scope is much bigger: 100x more packages, each package smaller, super deep dependency chains. OS package managers like apt/yum prioritize stability more and have a different process.

reply
> Toolchains on linux are not clear from dependency hell either - ever install an npm package.

That's where I stopped.

Toolchains on linux distributions with adults running packaging are just fine.

Toolchains for $hotlanguage where the project leaders insist on reinventing the packaging game, are not fine.

I once again state these languages need to give up the NIH and pay someone mature and responsible to maintain packaging.

reply
The counterpoint of this is Linux distros trying to resolve all global dependencies into a one-size-fits-nothing solution - with every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar. They are trying to fit a square peg into a round hole and act surprised when it doesn't fit.

And when it inevitably leads to all kinds of weird issues the packagers of course can't be reached for support, so users end up harassing the upstream maintainer about their "shitty broken application" and demanding they fix it.

Sure, the various language toolchains suck, but so do those of Linux distros. There's a reason all-in-one packaging solutions like Docker, AppImage, Flatpak, and Snap have gotten so popular, you know?

reply
> The counterpoint of this is Linux distros trying to resolve all global dependencies into a one-size-fits-nothing solution - with every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar. They are trying to fit a square peg into a round hole and act surprised when it doesn't fit.

This is only the case for Debian and derivatives, lol. Rolling-release distributions do not have this problem. This is why most of the new distributions coming out are Arch Linux based.

reply
I'm going to need a source for both of those claims.
reply
It sure sounds very Debian-ish, at least. I’m a Fedora user, and Fedora stays veeeery close to upstream. It’s not rolling, but is very vanilla.
reply
Agreed, but I don't think that has to do with either its "vanillaness" or the 6-month release schedule. Fedora does a lot of compatibility work behind the scenes that distros not backed by a large company more than likely couldn't afford.
reply
The real kicker is when old languages also fall into this trap. The latest I'm aware of is GHC, which decided to invent its own build system and install script. I don't begrudge them moving away from Make, but they could have used something already established.
reply
> python in another realm here as well

uv has more or less solved this (thank god). Night and day difference from pip (or any of the other attempts to fix it, honestly).

At this point they should just deprecate Pip.

reply
I have never experienced issues with pip, but I’m not sure whether that’s because I only do things pip directly supports and avoid the things it doesn’t help with.

I’d really love to understand why people get so mad about pip they end up writing a new tool to do more or less the same thing.

reply
Ah yes let's all depend on some startup that will surely change the license at some point.
reply
pip is really bad, though, so uv has a long way to fall before you aren't still net better off :^)
reply
Very clearly a better option than continuing to use Pip. Even if they do change the license in a few years I will definitely take several years of not being shat on by Pip over the comparatively minor inconvenience of having to switch to an open fork of uv when they rug-pull. If they ever do.

Continuing to use Pip because Astral might stop maintaining uv in future is stupidly masochistic.

reply