upvote
> I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in it that one time? It's a rather weird default to use. It's not like we mass-disable kernel modules everywhere every time someone discovers an EoP bug. Did we blacklist OpenSSL's binaries after Heartbleed?

I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than into workarounds, in my opinion), but these defaults are going to still be around ten years from now, when copy.fail is a distant memory.

reply
> We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time? It's a rather weird default to use.

The need for this feature/functionality in the first place is questioned by some:

> As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).

> The algorithm being used in this [specific] exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API. […]

* https://news.ycombinator.com/item?id=47952181#unv_47956312

reply
> a security bug in them that one time?

More than one time.

> a cryptographic performance enhancement feature

It's very rarely used.

> Did we blacklist OpenSSL's binaries after Heartbleed?

No, but lots of companies have since migrated away. OpenSSL was harder to move away from because there weren't obvious drop-in replacements. Blocking a syscall that you never actually used is simple and effective.

reply
In fairness, after Heartbleed there was quite a push to move away from OpenSSL (Google's BoringSSL, OpenBSD's LibreSSL, Mozilla's NSS, GnuTLS), but the alternative here would be moving to a different kernel, like FreeBSD or OpenSolaris/Illumos...
reply
That's just moving to a kernel that has 1000x fewer eyes on it. Yeah, sure, it will have fewer exploits, but purely because nobody bothers to look when there are much juicier targets on Linux.

But I am disappointed that we still don't have a clear OpenSSL successor; there is nothing to be salvaged from this mess of a project.

reply
1000x fewer eyes is true, but also: Linux, even in the kernel, has a long history of "move fast and break things".

Yes, the syscall API is (famously) stable, but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).

reply
> We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time?

To my knowledge, not many things were using the in-kernel code anyway; the recommended way is to use userland tools...

It's optional for OpenSSL, and systemd apparently needs it, but deleting the module from one of my systems didn't cause any issues. /shrug

reply
I haven't had it loaded on 100s of servers, ranging in kernel version from 5.10 to 6.14. The usage is just that low.
reply
IIUC, the AF_ALG interface only offers real performance wins if you have specialized hardware that the kernel can offload computations to. If you're not using that hardware, there's little reason not to do the crypto in userspace.
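For context, this is roughly how userspace drives the kernel crypto API: create an AF_ALG socket, bind it to an algorithm name, then accept() a per-operation fd to write data into and read the result from. A minimal sketch computing a SHA-256 digest (`alg_sha256` is an illustrative name, not a real API; error handling is abbreviated):

```c
#include <stddef.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

/* Hash `len` bytes of `buf` with the kernel's SHA-256 via AF_ALG.
 * Returns 0 on success, -1 if AF_ALG is unavailable (module missing,
 * blocked by seccomp, etc.) or the operation fails. */
int alg_sha256(const void *buf, size_t len, unsigned char out[32])
{
    struct sockaddr_alg sa = {
        .salg_family = AF_ALG,
        .salg_type   = "hash",    /* algorithm class */
        .salg_name   = "sha256",  /* algorithm name as the kernel knows it */
    };

    int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
    if (tfm < 0)
        return -1;                /* AF_ALG not available or blocked */

    if (bind(tfm, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        close(tfm);
        return -1;
    }

    int op = accept(tfm, NULL, NULL);   /* one fd per hash operation */
    if (op < 0) {
        close(tfm);
        return -1;
    }

    int rc = -1;
    if (write(op, buf, len) == (ssize_t)len &&
        read(op, out, 32) == 32)        /* kernel returns the digest */
        rc = 0;

    close(op);
    close(tfm);
    return rc;
}
```

Note that this is exactly the attack-surface concern from the quote above: any unprivileged process can reach this code path, and a seccomp rule on socket(2) (or simply not loading the algif modules) is enough to cut it off.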
reply
In fact, the authors specifically say on the very first line of their website that the copy/fail primitive can be used as a container escape. The entire premise of this article is flawed and irresponsible.
reply
AIUI they haven't shown a container escape and are just claiming it so far. Or did I miss something?
reply
Having write access to anything you can read should be enough, if libraries or binaries are shared (read-only) between the host and container.
reply
> I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.

There is no reason it would be the default policy. Otherwise you might as well block every socket and just multiplex everything on stdin/out.

reply
The reason is that it's very rarely used and has a history of issues.
reply
I'd have guessed that the default paranoia-first policy would be "drop everything; verify what you need" which would include AF_ALG.

share and enjoy!

reply
>might as well block every socket and just multiplex everything on stdin/out

You may be on to something…

reply
I just contributed this [1] which does what you want for seccomp. Well, not by default, but profiling is now effective against this attack.

Oh, and this [2] just happened.

[1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209 [2] https://github.com/moby/moby/pull/52501
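For anyone rolling their own profile in the meantime, a minimal sketch of the idea, assuming the OCI-style seccomp profile format (this entry goes in the profile's "syscalls" array; the linked PRs may implement it differently): reject socket(2) calls whose first argument is AF_ALG, which is address family 38 on Linux.

```json
{
  "names": ["socket"],
  "action": "SCMP_ACT_ERRNO",
  "errnoRet": 97,
  "args": [
    {
      "index": 0,
      "value": 38,
      "op": "SCMP_CMP_EQ"
    }
  ]
}
```

Returning EAFNOSUPPORT (errno 97) makes the failure look like a kernel simply built without AF_ALG, which is a case most software already handles gracefully.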

reply
There is an addendum at the bottom where they admit the page corruption is still problematic even with rootless podman.

Using this to justify their migration to micro-VMs seems very strange to me, though. Sure, for this CVE it would have been better, but a future attack could just as well hit a component shared across VMs but not containers. Are people really choosing technology based on the CVE of the week?

reply
These sorts of vulns are extremely common on Linux. This one is making the rounds for various reasons but it's a good justification for a migration away from containers if your threat model is concerned about it.

MicroVMs have much lower attack surface and you can even toss a container into one if you'd like.

Or use gvisor, which mitigates this vulnerability.

reply
Containers were never a security boundary. VMs have better isolation, which is why people choose them for security. Containers are convenience and usually have better performance.
reply
I see the ‘not a security boundary’ thing repeated constantly, and while it makes sense (e.g. containers are sharing the underlying kernel, or at least some access to it), if you think about it a little more, VMs are not magically different: they are better isolated, but VMs on the same host still share the host in common. A CVE next week that allows corruption of host state affecting, e.g., every VM under a particular hypervisor would be no less damaging than this CVE is to containers.
reply
> […] VMs are not magically different: they are better isolated, but VMs on the same host still share the host in common.

VMs are not different due to 'magic' but through hardware assist with things like Intel VT-x and AMD-V:

* https://en.wikipedia.org/wiki/X86_virtualization#Hardware-as...

* https://blog.lyc8503.net/en/post/hypervisor-explore/

* https://binarydebt.wordpress.com/2018/10/14/intel-virtualisa...

reply
You are obviously right that these are similar in principle: a VM isolation exploit would lead to the same exposure as a container isolation exploit.

VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.

If you are arguing the latter is not true (and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case), it would help to be explicit about why you believe an exploit in that area is similarly likely.

reply
I would say it's the fact that "not a security boundary" appears to be a pass/fail statement, whereas the reality is more like a security continuum, along which VMs are further than containers.
reply
Containers are a security boundary, yes.

> A CVE next week that allows corruption of host state that affects eg every VM under a particular hypervisor will be no less damaging than this CVE is to containers

Yeah, this almost never happens, though, whereas Linux privesc bugs turn up constantly.

reply
They may not provide the same isolation as VMs, but they clearly do limit some attacks. VMs do not provide the same isolation as physically separate hardware, either.

I would have thought they provide better isolation than multiple users, which is the traditional security boundary.

It might depend on what you mean by a container. Are sandboxes such as Bubblewrap and Firejail containers?

reply
Containers are a convenience boundary and they increase complexity of your risk assessments.

It is easy for security scanners to scan a Linux system, but will they inspect your containers, snaps, flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also log in to each container and do useful things? Your patches and all dependencies are up to date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers and capable of creating backup images or files suitable for restoring back to service?

reply
Security scanners already support most container and VM image formats in widespread use.

Does this increase complexity? Yes, it does. Is it worth the cost? Depends on each individual case IMO.

reply
> Security scanners already support most container and VM image formats in widespread use.

E.g.,

> Container Security stores and scans container images as the images are built, before production. It provides vulnerability and malware detection, along with continuous monitoring of container images. By integrating with the continuous integration and continuous deployment (CI/CD) systems that build container images, Container Security ensures every container reaching production is secure and compliant with enterprise policy.

* https://docs.tenable.com/enclave-security/container-security...

reply
You need a tool like Anchore or PrismaCloud to scan the container images, and then monitor them at runtime with PrismaCloud. Trellix can “scan”, but most people turn it off or exclude container directories on the host because it can interfere with the running containers.
reply
I've not looked at podman, but I believe moby/docker does now block this: https://github.com/moby/profiles/commit/7158007a83005b14a24f...
reply