The thing to complain about is if the version in testing is ancient.
FWIW, the issues referenced here are already fixed in trixie: https://security-tracker.debian.org/tracker/source-package/d...
That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?
And the disadvantage is that backporting is manual, resource intensive, and prone to error - and the projects that are the most heavily invested in that model are also the projects that are investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.
On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
That's not what it's about.
What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
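For concreteness, this kind of default can be pinned explicitly on the client so a packaged default change can't silently flip it. A sketch - the directive names are real OpenSSH options, but whether your distro's build enables them by default varies:

```
# /etc/ssh/ssh_config.d/gssapi.conf - sketch: state the defaults you rely on
Host *
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials no
```

Stating the settings you depend on, rather than relying on compiled-in defaults, is what makes this class of upgrade surprise survivable either way.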
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport.
They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.
But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.
So when you do update and get that GSSAPI change, it comes with two years' worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
> And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.
Having that sprung on you because you decided to run everything on latest is worse.
"Oh, we have a CVE, now we need to uproot everything because the new version that fixes it also changed shit"
With a release every year or two you can *plan* for it. You aren't forced into it as with "rolling" releases, because with rolling you NEED to take in new features together with bugfixes. With a Debian-like release cycle you can do it system by system when the new version comes out, and the "old" one still gets security fixes, so you're not instantly screwed.
> So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.
It exists in that format because people are running businesses bigger than "a man with a webpage deployed off master every few days".
Updated what, specifically, in production?
If you need a newer version of Python or Postgres or whatever it is possible to install it from third-party repos or compile from source yourself. But having a team of folks watch all the other code out there is a load off my plate: not worrying about libc, or OpenSSH, or OpenSSL, or zlib, or a thousand other dependencies. If I need the latest version for a particular service I would install that separately, but otherwise the whole point of a 'packagized' system is to let other folks worry about those things.
> So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.
I've done in-place upgrades of Debian from version 5 to 11 at my last job on many machines, never once re-installing from scratch, and they've all gone fine.
Further, when updates come down from the Debian repos I don't worry about applying them because I know there's not going to be weird changes in behaviour: I'm more confident in deploying things like security updates because the new .deb files have very focused changes.
One is security updates and bug fixes. These need to fix the problem with the smallest change to minimize the amount of possible breakage, because the code is already vulnerable/broken in production and needs to be updated right now. These are the updates stable gets.
The other is changes and additions. They're both more likely to break things and less important to move into production the same day they become public.
You don't have to wait until testing is released as stable to run it in your test environment. You can find out about the changes the next release will have immediately, in the test environment, and thereby have plenty of time to address any issues before those changes move into production.
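A minimal sketch of what that looks like in practice - point a throwaway box at the testing suite (the mirror URLs and components below are the usual defaults; adjust to taste):

```
# /etc/apt/sources.list.d/testing.list - sketch, for a test box only
deb http://deb.debian.org/debian testing main
deb http://security.debian.org/debian-security testing-security main
```

That machine then sees the next release's changes continuously, years before they reach stable.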
That's where you're wrong. They're not one and the same.
Debian stable often defers non-security bug fixes for up to two years by playing this game.
I'm not interested in new features unless they make things actually work.
Debian stable time and again favors broken over new. Broken kernels, broken packages. At least they're stable in their brokenness.
Hence my complaint.
But I have noticed far more breakage in the distro that DOES backport features, RHEL/CentOS. So much that we migrated away from it, when they backported a driver bug into CentOS 5 and then did the same backport of the bug into CentOS 6.
Also, rebuilding a package is trivial if you don't agree with what should and shouldn't go into the stable version.
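As a sketch of how trivial that rebuild is on a Debian system (assumes deb-src lines are enabled in your sources and build tooling is installed; the package name is just an example):

```shell
# Fetch the source package, apply your change, rebuild the .deb.
sudo apt-get build-dep dnsmasq        # pull in build dependencies
apt-get source dnsmasq                # download and unpack the source package
cd dnsmasq-*/
# ...edit, or drop a patch into debian/patches/...
dpkg-buildpackage -us -uc -b          # build unsigned binary packages
```

The resulting .deb installs with `dpkg -i` and otherwise behaves like the distro package.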
But two years is impractical and Debian gets a ton of friction over it. Web browsers and maybe one or two other packages are able to carve out exceptions, because those packages are big enough for the rules to bend and no one can argue with a straight face that Debian is going to somehow muster up the manpower to do backports right.
But everyone else - those dealing with Debian shipping ancient dependencies, or upstream maintainers expected to field bug reports from ancient versions - is expected to just suck it up, because no one else is big enough and organized enough to say "hey, it's 2026, we have better ways and this has gotten nutty".
Maybe the new influx of LLM-discovered security vulnerabilities will start to change the conversation; I'm curious how it'll play out.
They are not expected to deal with this. This is the responsibility of the Debian package maintainer.
If you (as an upstream) licensed your software in a manner that allows Debian to do what it does, and they do this to serve their users who actually want that, you are wrong to then complain about it.
If you don't want this, don't license your software like that, and Debian and their users will use some other software instead.
I think you need to chill out. Relicensing the way you suggest would be _quite_ the hostile act, and I'm not going to do that either. But I am an engineer, so of course I'm going to talk about engineering best practices when it comes up.
You don't have to take it as an attack on your favorite distro. Relicensing, on the other hand, really would pee in the pool of the upstream/downstream relationship between distros and their upstreams.
The trouble is you seem to be assuming that best practices for you, in your opinion, also apply to everyone else. They don't. Not everyone sees things the way you do or is facing the same issues or is making the same set of tradeoffs. There are downsides to what debian does but there are also upsides.
At this point, given the plethora of high-quality options available, and how easy it is to mix and match them on the same system thanks to container-related utilities and common practices, I really don't think there's any room for someone who dislikes the Debian model (in general, as opposed to targeted objections) to complain about how they do things. If you want a cutting-edge userspace on Debian stable, you have at least three options between Nix, Guix, and Gentoo. There are also Flatpak and Snap, which come built in.
I wager it's only a matter of time before we see a mass rooting event that hits Debian hard while everyone running something more modern has already been patched.
I think that might be what cuts down on the grandstanding about "freedoms" and "that's how we've always done things". You're certainly free to run things your way - right up until it becomes a public nuisance.
Why would you expect LLMs not to be simultaneously leveraged to catch backports that were missed or inadvertently broken?
Given recent headlines I think it's far more likely that we see a mass rooting event hit one or more of the bleeding edge rolling release distros or language ecosystems due to supply chain compromise. Running slightly out of date software has never been more attractive.
OpenBSD in particular can use competent developers to fix their dogshit filesystem.
I assure you, enormous numbers of people prefer Debian the way it is. I do not, ever, want "new stuff" in stable. I have better things to do than fight daily change in a distro; it's beyond a waste of time and just silly.
If you want new things, leave stable alone, and just run Debian testing! It updates all the time, and is still more stable than most other distros.
Debian is the way it is on purpose; it is not a mistake, not leftover reasoning, and nothing you said seems relevant in this regard.
For example, there is no better way than backporting, when it comes to maintaining compatibility. And that's what many people want.
Doing terrible work every 2 years is better than doing it every day?
Let's Encrypt has been a great example of this in certificate management.
And by skipping some releases, you will have less of that work. When something is changed in one release, then changed again on the next one, by waiting you only have to do the change once, instead of twice. And sometimes you don't even have to do anything, when something is introduced in one release and reverted in the next one.
If you want a rolling-release-like distro, just run Debian unstable. That's what you get: it's on par with all the other constantly updated distros out there. Or just run one of those.
Also, Debian stable has a lifetime a lot longer than 2 years, see https://www.debian.org/releases/. Some of us need distros like stable, because we are in giant orgs that are overworked and have long release cycles. Our users want stuff to "just work" and stable promises if X worked at release, it will keep working until we stop support. You don't add new features to a stable release.
From a personal perspective: Debian Stable is for your grandparents or young children. You install Stable, turn on auto-update and every 5-ish years you spend a day upgrading them to the next stable release. Then you spend a week or two helping them through all the new changes and then you have minimal support calls from them for 5-ish years. If you handed them a rolling release or Debian unstable, you'd have constant support calls.
Personally, if the hardware is working great, it seems like a waste of money to replace it just to upgrade software. Especially with Debian oldstable -> stable, where the upgrade is usually quite easy and painless.
The problem with this take is that it's stuck in the early 2000s, when all servers were pets to be cared for and lovingly updated in place.
It’s also circular: you have the same problem with the current model if you don’t have a test environment. And if you do have a test environment, releases can be tested and validated at a much higher cadence.
Debian patches defaults in OpenSSH code so it behaves differently than upstream.
They shouldn't legally be allowed to call it OpenSSH, let alone lecture people about it.
Let them call their fork DebSSH, like they have to do with "IceWeasel" and all the other nonsense they mire themselves into.
When you break software to the point you change how it behaves you shouldn't be allowed to use the same name.
The automatically tested Debian release is called Debian Testing. And it is stable enough.
Debian Stable is basically "we target a particular release with our dependencies instead of requiring the customer to update the entire system together with our software". That model works just fine as long as you don't go too far back.
> On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.
Narrator: It turned out things were not getting worse, they were just fine.
> We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.
That project is Red Hat, not Debian; they backport entire features to old versions (together with bugs!).
Some people will even run Debian on the desktop. I would never, but some people get real upset when anything changes.
Debian does regularly bring newer versions of software: they release about every two years. If you want the latest and greatest Debian experience, upgrade Debian on week one.
From your description, you seem to want Arch but made by Debian?
Isn't that essentially Debian unstable (with potentially experimental enabled)? I've been running Debian unstable on my desktops for something like 20 years.
But that does nothing for people who write and support code Debian wants to ship - packaging code badly can create a real mess for upstream.
And despite the name, it's probably more stable than the vast majority of rolling-release distros.
There are other distributions for what you want. Debian also has stable-backports, which does exactly that.
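For instance, stable-backports is just another apt suite you opt into per package, while the rest of the system stays on stable. A sketch (the suite name below assumes bookworm):

```
# /etc/apt/sources.list.d/backports.list - sketch
deb http://deb.debian.org/debian bookworm-backports main
```

After an `apt update`, something like `apt install -t bookworm-backports <pkg>` pulls just that one package from backports; nothing else moves off stable.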
No need to rage on distributions that also provide exactly what their users want.
Don't get me wrong, I use and encourage extensive automated testing. However, only extensive manual testing by people looking for things that are "weird" can really find all the bugs. (Though it remains to be seen what AI can do - I'm not holding my breath.)
I use Arch on my laptop; when I got it 2 years ago the AMD GPU was a bit new, so it was prudent to get the latest kernel, Mesa, everything. Since I use it daily it's not bad to update weekly and keep on top of occasional config migrations.
I use Debian stable on my home server, it's been in-place upgraded 4-ish times over 10 years. I can install weekly updates without worrying about config updates and such. I set up most stuff I wanted many years ago, and haven't really wanted new features since, though I have installed tailscale and jellyfin from their separate debian package repos so they are very current. It does the same jobs I wanted it to do 8 years ago, with super low maintenance.
But if you don't want Debian stable, that's fine. Just let others enjoy it.
Nowadays, even with Ubuntu's roughly two-year release cycle, I have to use third-party packages to get up-to-date software (PHP being one example) rather than some version from three years ago.
We no longer live in a world (with few exceptions) where running a 3-5 year old distribution (still supported) makes sense.
If I was to run dnsmasq on Debian, it would be in a container. Since I run Pihole (in a container), it kinda is.
https://security-tracker.debian.org/tracker/CVE-2026-2291
https://security-tracker.debian.org/tracker/CVE-2026-4890
https://security-tracker.debian.org/tracker/CVE-2026-4891
https://security-tracker.debian.org/tracker/CVE-2026-4892
https://security-tracker.debian.org/tracker/CVE-2026-4893
https://security-tracker.debian.org/tracker/CVE-2026-5172
fixed, fixed, fixed, fixed, fixed and fixed

Irrelevant strawman, since you're not accusing the dnsmasq package in Debian stable of being straight-up broken.