Not true; if there’s any evidence of the exploit being used in the wild, it’s much more responsible to release immediately.

Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.

In the age of AI, I’d argue that “responsible disclosure” is dead, arguably even in closed-source projects. Just ask Claude to diff against the previous version and see whether anything fixed in there could have security implications.

We’re not there yet, but very soon the only responsible way to disclose a vulnerability will be to disclose it immediately.
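A minimal sketch of that patch-triage idea, assuming git is available. A throwaway repo stands in for a real kernel tree; the tags and commit messages are invented for illustration:

```shell
# Reversing a silent fix: list commits between two releases and flag
# anything with security-smelling keywords. Everything below (repo, tags,
# commit messages) is a made-up stand-in, not the actual kernel history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=x@x -c user.name=x commit -q --allow-empty -m "initial release"
git tag v1.0
git -c user.email=x@x -c user.name=x commit -q --allow-empty \
  -m "parser: fix out-of-bounds write on malformed input"
git tag v1.1
# Triage step: anything matching these keywords is a candidate for reversing.
git log --oneline v1.0..v1.1 \
  | grep -iE 'overflow|out-of-bounds|use-after-free|race|double free'
```

Against a real tree you would diff the actual code (`git diff v1.0..v1.1`) rather than just the log, since security fixes are often committed with deliberately bland messages.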

reply
But they didn't release immediately -- they waited a month, but forgot to tell the distros, and forgot to check whether waiting a month had actually led to distros picking up the patches and shipping them.
reply
Which just reinforces my point: the patch was available, therefore the location of the vulnerability was also available.

The Linux kernel is one of the most audited open-source projects ever. I guarantee you that someone did reverse the patch.

> but forgot to tell the distros

Probably an oversight, but irrelevant. The bug was in the Linux kernel. It's insane to suggest that they should have notified everyone shipping the Linux kernel.

reply
“Made it into the wild”? Patches landed a month ago. Should they also wait until my Linksys router from 2018 has a patch ready?
reply
Patches are still in the process of landing in most major distros as of the time of this writing. Most users are not able to get an update through their distro's packaging mechanisms.
reply
It's a local vulnerability at least. How many people do you let log in to your router?

With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited. Even with shared hosting, you generally have root in your VM or container anyway. Unless this enables an escape from that?

There's still the risk that people who run "curl | bash" without care could get bitten, but usually it's "curl | sudo bash" anyway...

reply
> Even with shared hosting, you generally have root in your VM or container

Lots of shared hosters don't use VMs or containers. It's some arbitrary number of people logging in to a shared system, each one with a home directory under /home/THE_USER_NAME. I've had several such hosters over the years (thankfully not right now, though).

reply
> With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited

Things like HPC clusters are multi-user and don't entirely trust their users. If they did, we wouldn't need users/groups/permissions etc. in the first place.

reply
Yes. And not just HPC clusters: shared login servers are pretty common in academia. I manage several in our lab. Sure, we mostly trust the users against malice, but not so much against incompetence. A malicious VSCode plugin would run rampant in this space.

And then there are users running claude-cli and friends who may just find it convenient to use a local root exploit to remove obstacles.

reply
With this exploit it's trivial to jump from one container to another neighbor container. I've tried it and succeeded.

So containers don't protect you, only a VM.

reply
So anyone pulling a malicious Docker image jeopardizes the host? That would be bad...
reply
...no shit? Why do you think people care about this issue?
reply
> I've tried it and succeeded.

How so?

reply
Local root is part of the path to escaping
reply
That's mostly on Greg, a bit on the author.
reply
Fedora is patched.
reply