>the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

Not "separately to every single downstream", there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros

This random blogpost from 2022 serves as a proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...

I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.

reply
It is literally not the vulnerability researcher's problem to solve or address this.
reply
Brother, it is a simple email to a mailing list.

They are professional security researchers, they must know this is the way it is done in the ecosystem.

Kicking the can around leads nowhere.

reply
Have you considered that maybe it’s not the way it’s done?

It’s certainly a thing some people do. But there is not a unified consensus on how to handle vulnerabilities. Different security researchers (or, in fact, the same researchers releasing different findings) can and do take many different courses of action.

reply
That is just being pedantic. Why did they absolutely need to release this into the wild now? Why couldn’t they have waited?

“30 days should be enough time” why? Why is 30 days a magic number? Especially in open source.

Yeah, it isn't the researchers' problem to tell every distributor of the kernel about the fix or to verify that everyone has it, but fuck, maybe wait until at least someone has the fix, and maybe don't drop it on a Friday. That is just malicious.

reply
They didn’t release anything into the wild. It existed. The irresponsible thing would be letting it keep existing without telling anyone.
reply
You cannot deny that telling the entire world about this vulnerability before it is patched will cause a lot of abuse that would not have happened otherwise.
reply
Why not?
reply
What number of days do you want? If nobody tells the distros it could be months or years, and while it would be nice for the researchers to monitor/notify distros it's really not their job. They might not have thought of it.

And they dropped it on a Wednesday.

reply
Agree, but then where does the accountability lie? Presumably with the kernel maintainers themselves, correct? SOMEONE dropped the ball here. If we can't point the finger correctly, that seems like a problem in and of itself.
reply
It looks like the expected thing happened.

The kernel devs patched the kernel. The kernel devs have a pretty known, straightforward stance in how they ship fixes for anything, because anything in the kernel can be a security problem.

Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport what they feel are relevant. Others don’t do either.

Users pick what distro they use, and how they set up their infra.

Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.

But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.

reply
The fact that you had to take a whole paragraph to explain the contortionist arrival at something that isn't even really super clear after you explained it (you kinda pointed the finger both at end users and at distro maintainers simultaneously) and essentially boils down to "well, you as the end user need to be following kernel CVEs and can't trust distro maintainers to do it" does in fact indicate that there is a deeper issue at play here. You might say "well, there's no implicit chain of trust here". You might be right, but is that really the most effective way of doing things? Of course Linux is Use it at your Own Risk, but is there not a concept of "we as a collective community should get together and try not to drop the ball on some serious shit?"

In terms of something actionable, and maybe someone more well versed in how the distros work can tell me why this is a bad idea, but shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.

reply
> In terms of something actionable, and maybe someone more well versed in how the distros work can tell me why this is a bad idea, but shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.

Who decides who is a trustworthy distro maintainer? In the open source world everyone is equal, no favorites are chosen. If your point is that the distros backed by companies making at least $x million revenue a year should get priority disclosure... pretty sure somebody will take issue with this.

And it's not a hypothetical issue, either. Given the high stakes, bad actors are highly incentivized to masquerade as some small-scale niche distro until they get their effectively free zero-day CVE.

reply
To be more blunt: if you’re paying for a product, the vendor owes you whatever they committed to. If you’re a Redhat customer and Redhat blew your agreed SLA for this kind of security fix, go be mad at Redhat. (I don’t think Redhat is bad here, they’re just the vendor most known for a commercial offering of the ones listed here. I would say the same thing about Ubuntu Pro.)

Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.

I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.

The problem with making some kind of streamlined process to fix what you didn’t like about this vulnerability’s remediation is that it ignores basically all the complexity. Like “what about distros that don’t abide by embargoes”, or “which distros count as ones that matter”, or “what about all the vulns that aren’t in Linux but in software that’s packaged across many operating systems”.

reply
Agree on this so hard. Why does everyone expect instant patches and SLA-like infrastructure from unpaid volunteers?

If you want that, buy a commercial distro of linux, or use Windows. That's a huge part of Microsoft's value proposition to enterprise - they pay people to stay on top of security patches for you. Same with RedHat and others.

Expecting anything of unpaid volunteers is unreasonable.

> THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

reply
Right, you’re saying “system is working as designed”, and I’m agreeing, but I’m saying “the system as designed kind of sucks, how can we make it better”?
reply
I disagree that it sucks. It leverages a ton of people putting in their time and resources, and relies on system operators being active participants.

This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.

Folks here have gotten all kinds of bent out of shape that the groups involved didn’t do things in the way each internet commenter would have liked. But this is the system working.

reply
> This vulnerability is, for some threat models, a really big deal.

This vulnerability is, for other threat models, a death sentence.

> A security group found the vulnerability. They disclosed it. It was patched.

It was patched only after some people who should have been notified well in advance happened to notice something was up. That is NOT HOW IT'S SUPPOSED TO WORK.

For as long as the unpatched window remains open, skids will mess around and break things. Organized crime teams will use it for some really nasty hacking/ransomware/exfil/extortion/whatever. I guarantee you, this vuln is powerful and widespread enough that intel orgs will use it to kill targets, if they haven't already been using it for years. And if they have, we can just bank on them pulling out all the stops to take advantage of the remaining time for wreaking havoc. Make a project out of it and see if you can guess some of the future headlines.

Certain folks might not care much because they are citizens of one or more of those orgs' nations, so those targets are welcome to die in their opinion. That's fine. You do you, I'll do me, we'll all just go on doing our thing. But it's all fun and games until the wrong target gets hit and now there's a pact between the Germans and the Austrians being invoked and a few dozen million Europeans die. Or a geopolitical hotspot flares up and overnight 20% of the global petroleum supply chain grinds to a halt. Use your imagination. This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.

How is this different from any other day? Because now we've got a world-changing vuln out in the wild with no distro mitigation on day 1, and who the hell knows how many unscrupulous actors poised to take advantage of it before the fun and games stops. There will be no adults in the room when the miscreants decide to deploy while they still can.

Is this vuln going to start the next world war? Probably not. I don't expect it to and I hope and pray it doesn't. But leaving a vuln like this undisclosed to the very people whose job it is to protect us all is playing with fire. Not matches; more like a 10-grams-less-than-critical mass of plutonium.

sam is right to be pissed and he's doing a very good job of hiding it, because he knows that his users are at the mercy of TPTB in the Linux kernel world. Somebody's head needs to roll for this, and I don't mean some dude the CIA wants to hax0r because he's next on the list.

reply
> This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.

A Linux LPE is a nothingburger unless you’re relying on the Linux kernel to enforce internal security boundaries, which would simply be foolish.

reply
The PoC exploit code in python (3.10+) fits comfortably in 1k bytes. An unminified version that works for even older versions of python is just a hair under a 1500 byte packet payload, modulo headers for your preferred method of delivery. I can only guess how much it could be shrunk down to only the shellcode.

Now, y'all tell me, since I'm not a web guy. How hard is it going to be to tweak this lovely little pathogen into some kind of browser exploit? It just needs to be combined with a sandbox escape to work on current versions, right? Difficult but quite worth investing the time and effort to develop if that's your line of business. If that happens, every at-risk Tails user is going to have to stay offline for a while, unless they want to play the drone lottery.

Or how about chaining it with any of the as-yet unpatched bugs in gawd-only-knows how many web services out there that have poor input sanitization code? That bug now graduates from a DoS crash causer to a root grab. Good luck stopping it with your fancy AI Behavioral Analysis security tools. They better be fast. The sploit is going to do its work in two packets, maybe three. Fun times.

Lucky for us systems monkeys, it's not like anybody is spending billions of dollars to develop vuln finding AI tools right at this very second. So there shouldn't be many unpatched web services holes.

Oh, wait.

Of course, as the grey hats can already tell you, the really delicious part of this thing is how it's going to become the LPE tool of first resort for any APT that's already inside ur base killin ur doodz.

Nothingburger? This nothingburger is going to root a million OS instances before we know what hit us.

reply
I think you’re reading a ton into this vulnerability that is not there.
reply
Start a distro with your preferred upstream tracking policy.
reply
Is that the only option here? It’s certainly being framed as such.
reply
Fwiw, I'm completely with you on this. The folks you're communicating with seem utterly miserable, and don't seem to be communicating in good faith.

Not sure what the solution could/should be, but surely there could be a better, easier mechanism for the kernel team to advise all distro maintainers who care, and for those distro maintainers to subscribe in some way. Whether any distro maintainers do so (let alone do something about the vuln notifications) would be entirely up to them. There could also be some easier way for end users to see what the distros' policies on this are, so that they can take that into account when selecting a distro.

reply
It seems odd to call me utterly miserable and then suggest I’m not communicating in good faith.

We don’t have to agree, but the site rules are pretty clear that swipes like that aren’t ok.

That kind of communication path between distro maintainers and kernel devs already exists: the linux-distros@ mailing list. But since anybody can read it, posting “hey everybody, this is a security patch” has basically the same effect as the security researcher posting, in terms of disclosing the vuln to bad actors.

Given that anybody can make a Linux distro, and Linux distros generally aren’t either capable of or interested in background-checking their teams or policing their individual security practices, it doesn’t seem possible to have a communication channel that distros can sign up for without this problem.

reply
The person I was defending NEVER suggested that extra burden should be put on anyone. Just that there ought to be some system (even if imperfect) to make it easy for everyone (or, if not everyone, at least a select group, e.g. the main distros). But you and others kept saying that they were trying to put burden on various parties. That's the bad faith.
reply
How do you get a system without somebody (or multiple somebodies) being responsible for it?
reply
Just as a purely intellectual exercise, what changes about this if we leave aside ideas of "owe," "deserve," and "earn?"

There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.

reply
> leave aside ideas of "owe," "deserve," and "earn?"

Nonsensical string of words with no meaning.

If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow. Feel free to idk pay someone to track the kernel list and 4000 others and send you heads-ups? Try to pass a law to make people do what you want since you don't care about words like "owe"?

reply
> If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow.

Yes, exactly, the opposite of paying, since when you pay someone something they owe you whatever you paid for.

If we leave aside owe, deserve, and earn, we can start discussing things like what we want our kernel ecosystem to look like, how we can make it safer, etc, without being burdened by these concepts.

It's a simple intellectual exercise, that's all. If you're having a strong reaction to it, imo that'd make it even more fun for you to participate.

reply
But there was no intellectual exercise. Only a complaint with no proposal.

You want someone to do something for you for some other reason than that they owe you.

They already are doing something for you that they don't owe you. They are writing software that you benefit from. You just want them (or somebody) to do something else that they don't owe you.

They aren't, because they don't owe you and it's not something they want to do for fun, and so since the problem is they don't owe you, you wish to set aside words like "owe".

Well sure. Looks like you found the problem and the solution alright. Why didn't anyone else think of that?

reply
I don't feel like I'm complaining, I feel like I'm asking how else someone would frame it without leaning on the concepts mentioned. What changes about the dynamic then?
reply
But what does that mean? "owe" is just shorthand for the concept of obligation. For someone to do something, they need a reason to do it. It doesn't have to be a transaction but there does need to be some reason.

If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?

Do you imagine say a dating website where people compete to look attractive by getting points by doing the best job at finding the most bugs and patches and reporting them to the most downstream consumers the fastest?

reply
> For someone to do something, they need a reason to do it. It doesn't have to be a transaction but there does need to be some reason.

Exactly! That's what I'm interested in exploring.

> If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?

That's what I love exploring. Action with no obligation. Have you any examples of that in your life? Nobody obligates me to do the long walks I enjoy where I stick a 360 camera on my head and then upload the footage to Mapillary and other open platforms, I just like to do it, and I want to find other things that I'm motivated to do without obligation, and I'm fascinated by things people do for "no reason." Understanding human motivation is really important to me for some reason.

As to "what then," yes what then? If I run a cashless commune, how do we make sure the toilets get cleaned? That's the whole question, and I love exploring it. If you'd like to experience it yourself, you could always try attending a regional Burn for a bit of a micro version of it, people doing things just for the sake of it.

I'm sorry, I don't quite understand what you mean by the dating app thing.

reply
The real advantage of Microsoft is that there is someone you can sue!

Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.

reply
> Others backport what they feel are relevant.

But from what I understand they were not given enough information to know if it was relevant or not. The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.

reply
> The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.

If the commit message says it fixes a security bug, then bad actors immediately know there's a possible exploit there. So maybe it's intentional? (not familiar with the policy for this)

reply
Then we’re back to the initial problem. How can you fix and then communicate to downstream about security vulnerabilities without exposing those vulnerabilities in an open source project? If you want to reach all your possible users you have to disclose the vulnerability.
reply
The accountability fundamentally lies with the distro maintainers. They're the ones shipping a "product". Either they need to get agreements in place for advance notice, or correctly set expectations with their users that they won't get advance notice.

They dropped the ball when they shipped supposedly secure systems where their method for getting alerted to security updates was "hope people reporting to upstream will also notice a mailing list that will alert them".

(Caveat: Distros like Ubuntu advertise security updates, so this is on them. I'm not sure Gentoo does that; if they don't, well, then no one dropped the ball, because no one represented that Gentoo got prompt security updates.)

reply
All it takes is to be part of the kernel security team. I am surprised that so many strong commercial distributors just don't care enough to join it. Hopefully a valuable lesson was learned and fixes will be applied.
reply
The distros dropped the ball, imho. One of the (main) tasks of a distro is watching the changes in your upstream packages for important ones. This is complicated by the fact that the Linux kernel considers all bugfixes security fixes, so it's quite a lot to read. But that's life. The kernel developers are not wrong, as it's nearly impossible to be sure a bug in the kernel is not (also) a security problem.
reply
The patch wasn't even listed as fixing a bug.

"There is no benefit in operating in-place in algif_aead since the source and destination come from different mappings. Get rid of all the complexity added for in-place operation and just copy the AD directly."

reply
If you just want to get a bug fixed that annoys you, it's of course out of scope.

If researchers want to showcase their ability (either individually or as an organization) to identify and address security vulnerabilities in complex multi-stakeholder environments, I very much expect them to figure this out. After all, it doesn't make much sense if a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions, so that identified issues are resolved with minimal impact to the business.

reply
> a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions

These vendor interactions you're referring to are the company's customers, correct? Are you proposing the company hire another company to manage getting updates to their customers?

reply
If they had enough time to build a website with a fancy logo instead, one might, however, question where their priorities lie.
reply
I'd imagine it's not that they lacked the time to email linux-distros, but that they were unaware they were supposed to do so.

Feels like the more sensible process would be for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability and for distro maintainers to pay attention to that. Could be done without revealing what the vulnerability actually is in most cases, trusting the kernel maintainer's judgement. There does seem to be a public linux-cve-announce mailing list.

reply
Why is it the job of the kernel to notify the distros? Why isn't it the job of the distros to keep up on upstream security disclosures?

Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.

People have gotten so used to the Github flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say, which is to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.

reply
> Why isn't it the job of the distros to keep up on upstream security disclosures?

They can't, because (responsible) security disclosures are private, _not public_. That's the whole point of the system: notify the developers in private ahead of time (usually 30, 60 or 90 days) so they can write, test and roll-out the fixes before you release the info to the whole world. This is to minimize the time between when bad actors gain access to the exploits vs. when users install the patch. So "keeping up on security disclosures" cannot ever be a 'pull' process.

Usually the maintainers of the big distros are part of (private) security mailing lists and receive such info. Just not in this case, it seems.

reply
It would be best if distros kept tabs on kernel changes and updated as soon as possible when they see a security issue fixed.

Sending emails to some big distros would still result with e.g. Gentoo not getting that info because they are not a big distro.

reply
The problem is that the kernel devs (correctly, imo) consider all bugfixes security fixes. So the distros need to decide for themselves which ones are important enough to warrant an update. Apparently this one had quite an unclear commit message, so its importance was missed.

Not ideal, but also: shit happens? It's always a balancing act choosing the lesser of multiple evils and most of the time it seems to work ok-ish, which is probably the best we can hope for ;-P

reply
The kernel maintainers don't flag "security fixes" as special, and they have a well-thought-out reason for that, see many other comments in this thread.
reply
That, and they flag pretty much any random patch with a CVE these days, making it harder for distro maintainers to keep up.

For this specific "bug" they took care to not mention any security angle in the commit message, making it extremely hard for an outsider to even realize this was a critical patch. I assume this was because they wanted to push the fix without breaking embargo.
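
To make that concrete, here is a rough sketch (illustrative patterns, not any real tool) of the keyword triage a downstream could run over commit messages, and why it fails for exactly this kind of deliberately bland commit:

```python
import re

# Hypothetical heuristic: phrases that often accompany security fixes.
# A commit can be a critical fix while matching none of these, which is
# the whole problem with relying on commit-message wording.
SUSPECT_PATTERNS = [
    r"\brevert\b",
    r"\buse[- ]after[- ]free\b",
    r"\bout[- ]of[- ]bounds\b",
    r"\boverflow\b",
    r"\brefcount\b",
]

def score_commit(subject: str, body: str) -> int:
    """Count heuristic security-fix signals in a commit message."""
    text = f"{subject}\n{body}".lower()
    return sum(1 for p in SUSPECT_PATTERNS if re.search(p, text))
```

A message worded like the one quoted elsewhere in this thread ("There is no benefit in operating in-place...") scores zero, while an honest "fix out-of-bounds read" subject lights up immediately.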

reply
Where do you suggest they should have kept up on this disclosure?
reply
deleted
reply
> Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.

The post you are responding to says that it would be nice if they copied literally one mailing list.

reply
> a notification should have gone out from the kernel team to a curated list of distro security folk

Who would curate that list though? You don't need permission from the kernel team to spin up a new distro. I can go and create fork of Debian or Arch or whatever today and the kernel team would never know (and neither should they).

This is completely in the responsibility of the distros. If you don't like this model, use something like FreeBSD.

reply
Sounds like a job for the Linux Foundation maybe?

You don't need anyone's permission to make a distro, that's true, but if you notify Debian, Canonical, Fedora, Red Hat and Arch you're covering a very large fraction of users; way more than today's 0%. In cases like this, perfect is the enemy of the good.

reply
The Linux Foundation hasn't been about Linux (except marginally) in a long while, if ever.

The name is a misnomer.

reply
A rogue actor may create a new distro, maybe for some niche use case such as accessibility or retro gaming. After acquiring enough false (and even some real) users that the Linux Foundation accepts them as a notifiable distro maintainer, this maintainer could then pwn machines before the exploit is made public.
reply
I didn't say all distros should be notified, for that exact reason. I listed a handful of major distros.
reply
Who gets to decide who the lucky few are?
reply
Sounds like a job for the Linux Foundation maybe?
reply
Human beings
reply
Qualified by what?
reply
Are you implying it requires expertise to figure out the ten (plus or minus a factor of two) biggest distros? I think most people that understand the context of the question can figure out pretty similar lists.
reply
Rather than the current situation, where they can pwn machines after the exploit is made public?
reply
Yes. After the exploit is made public, the window of opportunity closes quickly.
reply
Uh, there is a list, named "linux-distros", which is for this purpose (and I think it's for more than just Linux, e.g. I believe it was used for the xz vuln).

Given this was announced when backports weren't ready (and given the PoC was at least opaque if not obfuscated), I'm getting the vibe fixing the vuln wasn't as high a priority as making a media splash.

reply
From TFA:

> Note that for Linux kernel vulnerabilities, unless the reporter chooses to bring it to the linux-distros ML, there is no heads-up to distributions.

so, no, the `linux-distros` list doesn't solve the problem.

reply
The impacted user count of your Debian fork with a custom-compiled kernel would probably not be more than 1, however.
reply
> they are in a much better position to coordinate and communicate with the maintainers than random reporters are.

They openly refuse to do this and have been given authority by MITRE to work against any such process.

reply
right, which is why it is confusing that the animosity is aimed at the reporters rather than the kernel security team.
reply
I think both parties share some blame here.
reply
Not really confusing. Linux is a sacred cow.

There would be a lot of people gloating if this happened to MS.

reply
Microsoft has a long and sordid history of cheerfully doing anything they can to fuck everyone over just to make a few more percentage points of profit.

Linux is a free kernel that literally revolutionized the computing landscape.

reply
Yes, this is the sacred cow status being referred to.
reply
It might be a sacred cow, but at least deservedly so. There is imho a difference between accidental incompetence (debatable, even) and active malice. Microsoft has done a lot of the latter, so it gets bashed more; no surprise there.
reply
"You keep using that word..." or term in this case.

There are only 2 words in this term, and neither one even slightly applies.

A sacred cow is called a sacred cow because there is no reason for it to be sacred.

Linux is perfectly subject to criticism, and so not at all sacred.

Linux has earned a stunning amount of respect and gratitude by actually providing stunning utility and quality. IE, it's not just a random object like a cow that everyone decided to worship for no reason.

Spoken as a freebsd user who has plenty of critiques of the entire linux ecosystem.

reply
> Linux has earned a stunning amount of respect and gratitude by actually providing stunning utility and quality. IE, it's not just a random object like a cow that everyone decided to worship for no reason.

I agree.

> A sacred cow is called a sacred cow because there is no reason for it to be sacred.

Here we diverge. Linux earns sacred cow status when people interpret legitimate criticism of it as an attack that must be debunked or dismissed. And there's plenty of that happening in this forum; you may not be treating it as a sacred cow, but plenty of people are.

And to expound on why it even matters, it does a disservice to Linux to treat it this way: if you can't engage with its flaws, you'll never help fix them, and instead attack people who try.

reply
What process? Wasn't the default state of things to just let any random person off the street spam vulnerability reports without validation or quality control?
reply
deleted
reply
> the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

It's 2026. We're more than 30 years into the Linux ecosystem. I don't believe this bullshit for a moment.

Given how trivially users can implement mitigation, distributions could have done _something_ to protect their users prior to publication date. A handful of messages is all that was required, not "every single downstream" - that is a straw man.

The publication of a bug that trivially gains root on an incredible number of Linux installs that was discovered using an A.I. tool prior to any of the "downstreams" implementing a fix is intentional. I speculate the motivation is free promotion of the A.I. tool.

reply
Two things can be true simultaneously: the Linux kernel ecosystem should have done better at communicating this to their downstreams, and publicly sharing the exploit was irresponsible.

It is not the responsibility of the initial reporter to communicate to distributions, but the fact that those responsible failed to do that, doesn't give everybody else a free pass.

reply
No, this was already timed disclosure. This is very common and widely accepted. 90+30 is what Google Project Zero uses, for example. The security researcher has met their ethical requirements already. This is entirely on the kernel's security team for failure to communicate downstream. That is their responsibility.

The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.

reply
I'm not advocating for delaying the disclosure at all; my point is, if you see your initial disclosure to the kernel didn't go anywhere, to be responsible is to put in a little extra effort to ensure the fix is picked up before you disclose.
reply
"Didn't go anywhere"? The kernel devs patched it! They patched it weeks ago! The kernel security team needs to communicate security problems in their own releases, because that is where the distros are already looking.

Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.

reply
The kernel devs patched it! They patched it weeks ago

FTFA:

> I see that on the 11th of April 6.19.12 & 6.18.22 were released with the fix backported.

> Longterm 6.12, 6.6, 6.1, 5.15, 5.10 have not received the fix and I don't see anything in the upstream stable queues yet as I write.

I wouldn't go so far as to call this "the kernel devs patched it". Virtually none of the kernels that distros are actually using today have received a fix. This looks like an extremely lackluster response from the kernel security team.

Pretty much the only non-rolling distros that are shipping a fixed kernel are Fedora 44 and Ubuntu 26.04, both released in the last few weeks. Their previous releases both shipped with Linux 6.17 which is still vulnerable today!
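(A quick way to see where a given kernel falls relative to those backports; a hedged sketch in Python. The fixed version numbers, 6.19.12 and 6.18.22, are the ones quoted above; everything else here is illustrative, and per the quote the LTS series have no fix queued at all.)

```python
# First releases said to carry the backported fix, per the comment above.
FIXED = {(6, 19): (6, 19, 12), (6, 18): (6, 18, 22)}

def parse(release: str) -> tuple:
    """Turn a release string like '6.19.12-200.fc43.x86_64' into (6, 19, 12)."""
    base = release.split("-", 1)[0]
    return tuple(int(p) for p in base.split(".")[:3])

def has_fix(release: str) -> bool:
    """True if this kernel series has received the backported fix."""
    ver = parse(release)
    series = ver[:2]
    if series not in FIXED:
        # e.g. 6.17, or LTS 6.12/6.6/6.1/5.15/5.10: no fix available yet
        return False
    return ver >= FIXED[series]
```

So `has_fix("6.18.22")` is true, while anything on 6.17 or an LTS series comes back false no matter how current it is.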

reply
None of this impacts disclosure norms. One important reason the clock starts ticking faster once any patch lands is that for serious attackers, the patch discloses the vulnerability. That's quadruply so in 2026, when many orgs are automatically pumping Linux patches through LLM pipelines to qualify them for exploitability.

But it's been at least 15 years since "reversing means patches are effectively disclosures legible mostly to attackers" became a norm in software security. And that was for closed-source software (most notably Windows). The norms are even laxer for open source.

reply
I'm on Fedora 43 and tried to hack myself with the python script. It didn't work on kernel 6.19.12-200.fc43.x86_64 which has a build date of April 12, 2026
reply
In the airless void of a message board thread, of course they should. What does it cost a commenter to demand that?
reply
> Should a security researcher that identifies a vulnerability in electron.js need to identify _every_ possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.

But this is a false comparison, right? The scope of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter spot checked one or two of the most popular distributions to see if fixes had been adopted, that seems like an extra level of nice diligence before publicizing the details.

It doesn't seem "insane" as much as "not the most efficient path" as has already been well argued. But it also doesn't seem unreasonable to think in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least?

reply
They embargoed their vulnerability for 30 days after Linux landed a kernel patch. They did their part. You will always be able to come up with other things they could do for you, and they will always at first blush sound reasonable because of how big and important Linux is, but none of those things will be responsibilities of the vulnerability researcher. Their job is to bring information to light, not to manage downstreams.

About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.

reply
I realize you've been championing this idea in the thread, and I admire it because I also recognize the misdirected blame. Please understand I do not harbor "blame" for the researchers.

> Their job is to bring information to light, not to manage downstreams.

The researchers are also members of a community in which more harm than is necessary may be dealt by their actions. Nuance must exist in evaluating "reasonable" and "responsible" in the context of actions.

reply
I strongly disagree. I want the information. I don't want to wait longer to find out about critical vulnerabilities so that researchers can fully genuflect to whatever Linux distribution norms people on message boards have. Their "actions" were to disclose a vulnerability that already existed and was putting people at risk. It's an absolute good.

If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them (the status quo ante of the patch) out for exploitability, and then add them to libraries of automatically-exploitable bugs.

People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.

reply
I do get that, this era of automation is too responsive to not go public to provoke action. I think I might just be wistful for an era in which the alternate path might have made a difference. Sorry to pile on.
reply
You're not piling on and I'm glad to have the opportunity to expand on my point.
reply
>publicly sharing the exploit was irresponsible

they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).

whoever did the marketing on "responsible disclosure" was a genius.

tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."

reply
In my world, responsibility is not just checking a box of following industry practice. Responsibility, as Wikipedia puts it on their social responsibility page, is working together with others for the benefit of the community. And yes, sometimes that's a bit larger burden than would ideally be the case. It's an imperfect world, after all -- and let's not forget the disclosure as it happened also placed a larger burden than ideal on people scrambling to patch.

And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.

reply
No.

The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.

Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.

The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.

reply
The situation with e.g. BlueHammer is fundamentally different: there, the only party that could act on it (Microsoft) ignored them. In this case, the parties that could act on it weren't notified at all.

I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that, that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions if they haven't picked it up a week or two before.

reply
Downstream vulnerability disclosure is a negotiation between the downstreams and the upstreams. It is not the job of a vulnerability researcher to map this out perfectly (or at all).
reply
Yes and that's why the current system where security researchers are expected to reach out to the distro mailing list is flawed and instead there should be a defined pipeline for the kernel security team to give a heads up.
reply
> The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile.

[citation needed]

Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?

reply
>[citation needed]

there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.

one decent start is https://projectzero.google/vulnerability-disclosure-faq.html...

"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] We used this model of disclosure for over a decade, and the results weren't particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren't seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.

[...]

While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.

[...]

For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."

>Linux distros (specifically) act in this way

carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.

reply
Really?

Because a situation where the development team fails to appreciate the severity of a security vulnerability, and has an established procedure that requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team do not understand that.

This is the result of that failure.

If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.

reply
so what? we should never disclose anything? this will only result in companies suppressing disclosure and leaving vulnerabilities unpatched.
reply
If the maintainers were unresponsive, sure -- but it seems slightly hard to buy that a responsible reporter trying to make a big splash and a good impression wouldn't first check "did this make it out to the distros?" before making sysadmins' days real shitty, even if technically they could point fingers at other parties. At which point, if they're paying any attention at all to what they reported, they may have realized that a mistake was made.
reply
its an industry standard disclosure process. 90 days after reporting, or 30 days after the patch lands, the vuln is disclosed.

the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.

the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.

reply
The problem is that if you make too big of a deal about a particular patch, then someone just reverse engineers the vuln from the fix and your responsible disclosure period doesn't exist anymore.

Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.

reply
> Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.

How do you figure that? From what I could tell from the earlier post, the fix has only been backported to 6.18 and later, and as TFA indicates the distros were not informed of the security implications of this fix. All distros shipping a major kernel version from more than a year ago -- and that includes all LTS kernels -- are vulnerable, regardless of how "timely" their patch schedules follow upstream.

reply
you minimize this with the curated contact list.

the baddies are looking at every patch anyways.

reply
Yes, it's just incompetence from everyone involved, not malice. The company making the disclosure doesn't actually care, and the kernel processes are ineffective.
reply
No, it's incompetence from everyone involved except the company making the disclosure, which, despite the fact that the existing norms are not in fact binding (like people downthread seem to believe), they followed.
reply
deleted
reply
Really? It seems very odd to not check in on the status of the fixes, even if it's technically possible to pass the blame to other people.

Even if the only purpose of looking at the status to make yourself look good in marketing materials, it's surprising that it didn't happen.

reply
`it's technically possible to pass the blame to other people` presupposes that the blame belongs to the reporter unless effort is taken to "shift" it. This is just an inaccurate worldview as many people have pointed out clearly in this discussion. If there's a vulnerability in software the blame lies with people who wrote and maintain the software, not someone who finds and discloses a vulnerability. The person who should `check in on the status of the fixes` is the person who owns the thing being fixed, which is very much the kernel and distro maintainers and not the security researcher. It is you who are willfully shifting blame to an innocent party
reply
One of the reasons this unavoidable deadline was invented is that the alternative is that one company (or all of them) can simply decide to ignore the vuln report, and then the vulnerability will stay forever undisclosed and forever out there in the wild. And prisoner's dilemma suggests that most companies would choose "do nothing" in this scenario: they don't have to do anything, and if the vuln stays undisclosed, it probably won't be exploited anyhow. Win-win!
reply
I'm confused. Can you explain how this applies to the current situation, where no vuln reports were submitted to the groups responsible for distributing patches?
reply
>where no vuln reports were submitted to the groups responsible for distributing patches?

the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.

reply
> those are the people responsible for patching the kernel, which they did 30 days ago.

They patched 2 of 7 supported kernels.

reply
Guess the other supported kernels aren't supported enough
reply
I see, may the people who are responsible for the infrastructure you depend on be less concerned about shifting blame than you are.
reply
imagine you use a dependency in your code. like left-pad. and some vulnerability is found in left-pad.

is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?

or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?

reply
> the groups responsible for distributing patches?

Those groups don't exist, to my knowledge. And probably can't, realistically speaking.

reply
deleted
reply