I'm no expert, but the kernel is shared between all containers and the host.
I don't believe the kernel maintains separate page caches for each container; a malicious CI job could corrupt a binary from any container, or the host.
It’s manipulating the binary to make it as small as possible. In golf, the lowest score wins. So, in this context, the smallest binary that still works wins.
Nowhere does the article say that user namespaces completely mitigate the vulnerability. The page cache corruption still happens, but not being able to obtain root on the target host raises the attack from a one-liner to having to figure out which shared base image layers are in use, by whom, and by what binaries (think of a shared CI platform like the one we run for GNOME).
The article does not prove that you can't get root on the host via page cache corruption, just that the specific exploit strategy they tried didn't work.
There's a specific reason why the exploit targets a setuid binary: if you poison it in memory, it will be executed with the permissions of the user owning it, in this case root, so a setuid(0) plus spawning a new shell will effectively give you root access on the host system, at least on systems where uid=0 inside the container is equivalent to uid=0 outside. The vulnerability is still there and is deadly serious; with rootless containers the attack just takes more work, since the attacker has to identify other factors (which containers use a shared base image, which binaries are being called, which binaries should be overridden in memory, and so on).
On top of this there's another thing worth mentioning: it's common in OpenShift (for non-rootless podman) to allow CAP_SETGID/CAP_SETUID to enable creating a container within a container (the allowPrivilegeEscalation setting in SCCs). That effectively grants you the ability to become uid=0 in the container, and in that scenario uid=0 matches the host's uid=0. The important difference is that this particular instance of the root user doesn't have CAP_SYS_ADMIN (or most of the other privileged kernel capabilities), so the actions the user can then perform are very limited.
That just prevents the faulty module from loading, so you have time to fix it properly (a kernel upgrade).
Technically there should be zero impact (the very, very few tools that use it will fall back to userspace); I haven't even found that module loaded in our infrastructure.
Then check if it is loaded, and if it is, unload it or reboot.
I would say any sanely written application would fall back to doing the requested operations in userspace if it cannot use the AF_ALG socket.
It could fail though. But I have not yet heard of anyone noticing big problems due to disabling the problematic modules. And I have not noticed any such issues on our systems at ${DAYJOB}.
IMHO, since these parts of the Linux kernel are so crappy, I personally would say disabling them is a good default choice. YMMV. But if you encounter problems, you can always re-enable the modules. (Preferably after upgrading your kernel, obviously.)
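For reference, the usual mechanism for preventing a module from being auto-loaded is a modprobe.d drop-in. A sketch, assuming the AF_ALG front-end modules are the ones you want to block (the file name is arbitrary; module names vary with kernel config, so check yours):

```
# /etc/modprobe.d/disable-af-alg.conf
# The "install <module> /bin/false" form also defeats on-demand
# auto-loading, e.g. when something calls socket(AF_ALG, ...);
# a plain "blacklist" line only stops loading by alias at boot.
install af_alg /bin/false
install algif_hash /bin/false
install algif_skcipher /bin/false
install algif_aead /bin/false
install algif_rng /bin/false
```

You can check whether anything already has them loaded with `lsmod | grep -E 'af_alg|algif'`; an already-loaded module needs an unload or a reboot, as the comment above says.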
I think it was a bad idea to put cryptographic APIs or VPNs in the kernel. If userspace is too slow for this, you should either reduce context-switch overhead or create a special kind of process that is isolated but quick to switch into. They are repeating Windows' mistakes.
It's not faster than userspace, it's much slower normally. On special boards with crypto accelerators it can be faster, and there can be compliance reasons to want it. References: [1] https://www.chronox.de/libkcapi/html/ch01s02.html [2] https://lwn.net/Articles/410763/ [3] https://trac.gateworks.com/wiki/linux/encryption#PerformaceC...
Well, at least it's crufty stuff like AF_ALG, which barely anyone is using and which is kind of a forgotten corner of the kernel.
I don’t oppose reasonable crypto in the kernel, like WireGuard.
I don't think it was a bad idea. Doing any idea requires an investment, and the kernel layer would have been the better investment; just ask the history of export-control law which one the US feared breaking more. Having security in userland means attacks in either the kernel or userland are worthwhile against it. In the kernel, it could have been secured better than OpenSSL was with fewer resources, and keys could have been kept unavailable from userland. Instead it got basically no uptake, as everyone hobbled along on slightly more resources spread even thinner across OpenSSL clones.
It sounds like they are saying the exploit works but the proof-of-concept doesn't, for superficial reasons(?). That hardly seems like something to brag about.
If I understand correctly, rootful podman with --userns=auto would also prevent the privilege escalation?
How?
Exploit download/source: https://github.com/theori-io/copy-fail-CVE-2026-31431/blob/m...
The dedicated website: https://copy.fail
Sigh.
1. I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.
2. The write-to-RO-page-cache primitive STILL WORKED! It’s just that the particular exploit used had no meaningful effect in the already-root-in-a-container context. If you think you are safe, you’re probably wrong. All you need to make a new exploit is an fd representing something that you aren’t supposed to be able to write to. This likely includes CoW things where you are supposed to be able to write after CoW but you aren’t supposed to be able to write to the source.
So:
- Are you using these containers with a common image, or even a common layer in an image, to isolate dangerous workloads from each other? Oops: they can modify the image layers and corrupt each other. There goes any sort of cross-tenant isolation.
- What if you get an fd backed by the zero page and write to it? This can’t result in anything that the administrator would approve of.
- What if you ro-bind-mount something in? It’s not ro any more.
> I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.
I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time? It's a rather weird default to use. It's not like we mass-disable kernel modules everywhere every time someone discovers an EoP bug, do we? Did we blacklist OpenSSL's binaries after Heartbleed?
I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than workarounds in my opinion), but these defaults are going to be around ten years from now when copy.fail is a distant memory.
> We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time? It's a rather weird default to use.
The need for this feature/functionality in the first place is questioned by some:
> As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).
> The algorithm being used in this [specific] exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API. […]
* https://news.ycombinator.com/item?id=47952181#unv_47956312
> a security bug in them that one time?
More than one time.
> a cryptographic performance enhancement feature
It's very rarely used.
> Did we blacklist OpenSSL's binaries after Heartbleed?
No, but lots of companies have since migrated away. OpenSSL was harder to move away from because there weren't obvious drop-in replacements. Blocking a syscall that you never actually used is simple and effective.
In fairness, after Heartbleed there was quite a push to move away from OpenSSL, toward Google's BoringSSL, OpenBSD's LibreSSL, Mozilla's NSS, or GnuTLS. But the alternative here would be moving to a different kernel, like FreeBSD or OpenSolaris/Illumos...
That's just moving to a kernel that has 1000x fewer eyes on it. Yeah, sure, it will have fewer exploits, but purely because nobody bothers to look when there are much juicier targets on Linux.
But I am disappointed that we still don't have a clear OpenSSL successor; there is nothing to be salvaged from this mess of a project.
1000x fewer eyes is true, but also: Linux, even in the kernel, has a long history of "move fast and break things".
Yes, the syscall API is (famously) stable, but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).
> We're disabling a cryptographic performance enhancement feature entirely because there was a security bug in them that one time?
To my knowledge, not many things were using the in-kernel code anyway; the recommended way is to use userland tools...
It's optional for OpenSSL; systemd apparently needs it, but deleting the module from one of my systems didn't cause any issues. /shrug
I haven't had it loaded on hundreds of servers ranging in kernel version from 5.10 to 6.14. Usage is just that low.
iiuc the AF_ALG interface only offers real performance wins if you have specialized hardware that the kernel can offload computations to. If you're not using that hardware, there's little reason not to do the crypto in userspace.
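To make the interface under discussion concrete: AF_ALG is reachable from ordinary socket code. A minimal sketch of kernel-side SHA-256 through it, using the constants Python's standard library exposes, with a userspace fallback when the socket family is unavailable (seccomp block, module not loaded, non-Linux):

```python
import hashlib
import socket

def sha256_kernel(data: bytes) -> bytes:
    """SHA-256 via the kernel crypto API (AF_ALG), falling back to hashlib.

    The AF_ALG flow: create the algorithm socket, bind() it to a
    (type, name) pair, accept() an operation socket, write the message,
    then read back the digest.
    """
    try:
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as alg:
            alg.bind(("hash", "sha256"))
            op, _ = alg.accept()
            with op:
                op.sendall(data)
                return op.recv(32)  # a SHA-256 digest is 32 bytes
    except (AttributeError, OSError):
        # AF_ALG not available: do it in userspace, which -- as the
        # comment above notes -- is what most applications should be
        # doing anyway unless hardware offload is in play.
        return hashlib.sha256(data).digest()
```

Either path returns the same digest, which is rather the point: without a crypto accelerator the kernel round-trip buys you nothing over `hashlib`.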
In fact, the authors specifically say on the very first line of their website that the copy/fail primitive can be used as a container escape. The entire premise of this article is flawed and irresponsible.
AIUI they haven't shown a container escape and are just claiming it so far. Or did I miss something?
Having write access on anything you can read should be enough if libraries or binaries are shared (read-only) between the host and container.
> if libraries or binaries are shared (read-only) between the host and container.
Yeah, exactly: that's a pretty big "if", and not how a lot of container automation does things. In particular you'd need to hit the base system; it's no help at all if all you can hit are application files that the host does nothing with.
I just contributed this [1] which does what you want for seccomp. Well, not by default, but profiling is now effective against this attack.
Oh, and this [2] just happened.
[1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209 [2] https://github.com/moby/moby/pull/52501
> I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.
There is no reason it would be in the default policy. Else might as well block every socket and just multiplex everything on stdin/out.
I'd have guessed that the default paranoia-first policy would be "drop everything; verify what you need" which would include AF_ALG.
share and enjoy!
How do you propose to implement that "drop everything except what you need" policy? Do your containers come with a detailed list of which OS services and syscalls are required? I think your idea has the same issue as what held back the adoption of SELinux: many developers think that having to enumerate their application's behaviour like that is an undue burden.
A compounding issue is that using AF_ALG doesn't require a separate syscall: it's just SYS_socket with the first argument set to 38. Your container behaviour specification needs to be specific enough to enumerate not only the allowed syscalls but also the allowed values for each syscall's parameters.
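The argument-level filtering that requires can be sketched directly as a seccomp-BPF program: deny socket(2) when its first argument is AF_ALG (38), allow everything else. This version installs the filter via raw prctl through ctypes; the syscall number 41 for socket is x86-64 specific, and a production filter would also check seccomp_data.arch, so treat it purely as an illustration:

```python
import ctypes
import errno

# BPF opcode encodings (<linux/bpf_common.h>) and seccomp constants
# (<linux/seccomp.h>, <linux/prctl.h>).
BPF_LD_W_ABS  = 0x20  # BPF_LD | BPF_W | BPF_ABS
BPF_JMP_JEQ_K = 0x15  # BPF_JMP | BPF_JEQ | BPF_K
BPF_RET_K     = 0x06  # BPF_RET | BPF_K
SECCOMP_RET_ALLOW = 0x7FFF0000
SECCOMP_RET_ERRNO = 0x00050000  # low 16 bits carry the errno to return

PR_SET_NO_NEW_PRIVS = 38
PR_SET_SECCOMP = 22
SECCOMP_MODE_FILTER = 2

NR_SOCKET = 41  # x86-64 only -- an assumption of this sketch
AF_ALG = 38

class SockFilter(ctypes.Structure):
    _fields_ = [("code", ctypes.c_uint16), ("jt", ctypes.c_uint8),
                ("jf", ctypes.c_uint8), ("k", ctypes.c_uint32)]

class SockFprog(ctypes.Structure):
    _fields_ = [("len", ctypes.c_ushort),
                ("filter", ctypes.POINTER(SockFilter))]

def deny_af_alg() -> None:
    """Make socket(AF_ALG, ...) fail with EAFNOSUPPORT; allow all else."""
    # struct seccomp_data: nr at offset 0, args[0] at offset 16
    # (low 32 bits on little-endian).
    insns = (SockFilter * 6)(
        SockFilter(BPF_LD_W_ABS, 0, 0, 0),           # A = syscall nr
        SockFilter(BPF_JMP_JEQ_K, 0, 2, NR_SOCKET),  # not socket? -> allow
        SockFilter(BPF_LD_W_ABS, 0, 0, 16),          # A = args[0] (domain)
        SockFilter(BPF_JMP_JEQ_K, 1, 0, AF_ALG),     # AF_ALG? -> errno
        SockFilter(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW),
        SockFilter(BPF_RET_K, 0, 0,
                   SECCOMP_RET_ERRNO | errno.EAFNOSUPPORT),
    )
    prog = SockFprog(len(insns),
                     ctypes.cast(insns, ctypes.POINTER(SockFilter)))
    libc = ctypes.CDLL(None, use_errno=True)
    # Required so an unprivileged process may install a filter.
    if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "PR_SET_NO_NEW_PRIVS failed")
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER,
                  ctypes.byref(prog), 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "SECCOMP_MODE_FILTER failed")
```

Note how much of the program is spent just reaching args[0]: a syscall-name allowlist alone cannot express this rule, which is the compounding issue the comment above describes.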
There are those who are paranoid and those who are expedient. If you're truly paranoid, you spin up the thing you want to run, measure what it does, and open the holes to allow it to do what it needs to. It's tedious and sometimes error-prone, but in some environments it is necessary.
In the vast majority of the world, you set permissions to what's reasonable and trust that most of the time things will work out pretty well and have a plan for if you need to fix things on the fly.
I personally am not terribly paranoid, but I've worked places where we had to be pretty paranoid (shared hosting).
The reason is that it's very rarely used and has a history of issues.
>might as well block every socket and just multiplex everything on stdin/out
You may be on to something…
There is an addendum at the bottom where they admit the page corruption is still problematic even with rootless podman.
Although using this to justify their migration to micro-VMs is very strange to me. Sure, for this CVE it would have been better, but surely a future attack could hit a component shared across VMs but not containers? Are people really choosing technology based on the CVE-of-the-week?
Containers were never a security boundary. VMs have better isolation, which is why people choose them for security. Containers are convenience and usually have better performance.
I see the ‘not a security boundary’ thing repeated constantly, and while it makes sense (eg. they’re sharing the underlying kernel, or at least some access to it), if you think about it a little more, VMs are not magically different: they are better isolated, but VMs on the same host still share the host in common. A CVE next week that allows corruption of host state that affects eg every VM under a particular hypervisor will be no less damaging than this CVE is to containers.
> […] VMs are not magically different: they are better isolated, but VMs on the same host still share the host in common.
VMs are not different due to 'magic' but through hardware assist with things like Intel VT-x and AMD-V:
* https://en.wikipedia.org/wiki/X86_virtualization#Hardware-as...
* https://blog.lyc8503.net/en/post/hypervisor-explore/
* https://binarydebt.wordpress.com/2018/10/14/intel-virtualisa...
You are obviously right that these are similar in principle: a VM isolation exploit would lead to the same exposure as a container isolation exploit.
VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.
If you are arguing the latter is not true — and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case — it would help to be explicit about why you believe an exploit in that area is similarly likely.
I would say it's the fact that "not a security boundary" appears to be a pass/fail statement, whereas the reality is more like a security continuum, along which VMs are further than containers.
Containers are a security boundary, yes.
> A CVE next week that allows corruption of host state that affects eg every VM under a particular hypervisor will be no less damaging than this CVE is to containers
Yeah, this almost never happens, though, whereas Linux privesc is 10x a day.
They may not provide isolation like VMs do, but they clearly do limit some attacks. VMs do not provide the same isolation as using physically separate hardware either.
I would have thought they provide better isolation than using multiple users, which is the traditional security boundary.
It might depend on what you mean by a container. Are sandboxes such as Bubblewrap and Firejail containers?
Containers are a convenience boundary and they increase complexity of your risk assessments.
It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also get logged in to each container, and do useful things? Your patches and all dependencies are up-to-date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers and capable of creating backup images or files, that are suitable for restoring back to service?
Security scanners already support most container and VM image formats in widespread use.
Does this increase complexity? Yes, it does. Is it worth the cost? Depends on each individual case IMO.
> Security scanners already support most container and VM image formats in widespread use.
E.g.,
> Container Security stores and scans container images as the images are built, before production. It provides vulnerability and malware detection, along with continuous monitoring of container images. By integrating with the continuous integration and continuous deployment (CI/CD) systems that build container images, Container Security ensures every container reaching production is secure and compliant with enterprise policy.
* https://docs.tenable.com/enclave-security/container-security...
You need a tool like Anchore or PrismaCloud to scan the container images, then monitor them at runtime with PrismaCloud. Trellix can “scan”, but most people turn it off or exclude container directories on the host because it can interfere with the running container.
These sorts of vulns are extremely common on Linux. This one is making the rounds for various reasons but it's a good justification for a migration away from containers if your threat model is concerned about it.
MicroVMs have much lower attack surface and you can even toss a container into one if you'd like.
Or use gvisor, which mitigates this vulnerability.
I've not looked for podman but moby/docker I believe does now block this https://github.com/moby/profiles/commit/7158007a83005b14a24f...
> [...] that root was just my unprivileged podman user on the host
Couldn't you then simply re-run the exploit again as unprivileged podman user and gain root on the host?
No, because you're still in the container, and there's no route to the host's root from there.
If you can orchestrate a container escape from the container's "root", then you're on to something.
This pollutes the page cache, which affects the entire host. Getting "root" in a rootless container may mean nothing by itself. But if the attack targets commands like ls, ps, cat, or grep, then any process outside the container that invokes one of those commands runs the attacker's payload. And what if that payload is just the same attack run again, to escalate to root? Now you have escaped the container and gained root.
Did anyone try it? It's supposed to work, right?
If the goal is just preventing full root privileges, a CapabilityBoundingSet in a systemd unit will do.
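As a sketch, a hardened unit might drop everything from the bounding set (the unit and binary names here are placeholders, not from the article):

```ini
# /etc/systemd/system/myapp.service (hypothetical unit)
[Service]
ExecStart=/usr/local/bin/myapp
# Empty bounding set: the service and its children can never
# gain any capability, even by exec'ing setuid binaries.
CapabilityBoundingSet=
# Belt and suspenders: setuid/setgid/file caps have no effect.
NoNewPrivileges=yes
```

`NoNewPrivileges=yes` is implied by several other hardening directives, but stating it explicitly makes the intent clear.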
However, Copy Fail can be used in many other ways that aren't contained by containers or the above settings. For example, it can modify /etc/ssl/certs to prepare for MitM attacks. If you have multiple containers based on the same image, then one compromised CA set affects the others.
I added these
to my .service. Is it good enough?
Good enough for what?
I could be wrong, but I’m not sure those settings are enough to mitigate Copy Fail.
If your distro offers a patched kernel, it’s best to upgrade to that one and reboot.
You can also disable the vulnerable module (how to do it depends on what distro you're using). But if you stay on an old unpatched kernel you might be exposed to other vulnerabilities.
tl;dr - within the container, the exploit works, and elevates to root (uid 0) within the container - BUT because that namespace actually maps to uid 1000 (the user) outside the container, the escalation does not flow up to the host.
But… does this escape the container? If not (the author seems to indicate it does not), then does it matter whether you are in Docker or rootless Podman, since the end result is always the same: you have elevated to root within the container? If the rest of the container's filesystem isolation does its job, the outcome is identical either way? Though I guess another chained exploit to escape the container would be worse in Docker? Do I have that right?
If any security relevant file from the host is mounted into the container this could be exploited quite easily. It is definitely a viable tool for escaping containers but it would require a bit of an attack chain and some containers may not be vulnerable.
This is a problem, and most people hadn't considered it before, because the caching is done to speed up build-pipeline performance:
“While rootless containers prevent the attacker from escalating to host root, the page cache is still shared across the host. Containers that re-use the same base image layers share the same cached pages for those layers — if a malicious CI job corrupts a binary in the page cache, other containers launched from that same image could end up executing the poisoned version.”
I'm no expert, but the kernel is shared between all containers and the host.
I don't believe the kernel maintains separate page caches for each container; a malicious CI job could corrupt a binary from any container, or the host.
Only if there is a shared inode between host and container.
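The page cache is keyed by (device, inode), so a quick way to check whether two paths would share cached pages is to compare those fields. A minimal sketch (not from the article):

```python
import os

def same_inode(path_a: str, path_b: str) -> bool:
    """True if both paths resolve to the same (device, inode) pair,
    i.e. reads through either path hit the same page-cache entries."""
    a, b = os.stat(path_a), os.stat(path_b)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)
```

For example, you could compare a binary inside an overlayfs lower layer against the host's copy of that layer file to see whether poisoning one would affect the other.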
Running sstrip on an ELF binary is called ELF "golfing"? TIL…
It is, although real ELF golfers consider that a little naive.
It does feel a little simplistic to get a special name. But lesser things have gotten fancier names...
Sorry for posting a n00b question, but could you share etymology on this term golfing?
It’s manipulating the binary to make it as small as possible. In golf, the lowest score wins. So, in this context, the smallest binary that still works wins.
In golf, lower scores are better.
This feels LLM generated, lots of emdashes and even more text around a completely false premise.
What is the false premise in the article?
That rootless containers mitigate kernel exploits.
Nowhere does the article claim that user namespaces completely mitigate the vulnerability. Page cache corruption still happens, but not being able to obtain root on the target host raises the bar from a one-liner to having to figure out which shared base image layers are in use, by whom, and through which binaries (think of a shared CI platform like the one we run for GNOME).
The article does not prove that you can't get root on the host via page cache corruption, just that the specific exploit strategy they tried didn't work.
There's a specific reason the exploit targets a setuid binary: if you poison it in memory, it will be executed with the permissions of its owner, in this case root, so a setuid(0) plus spawning a new shell effectively gives you root access on the host system, at least on systems where uid 0 inside the container is the same as uid 0 outside. The vulnerability is still there and is deadly serious; with rootless containers the attack just takes more work, since the attacker has to identify other factors (which containers use a shared base image, which binaries are being called, which binaries should be overridden in memory, etc.).
On top of this there's another thing worth mentioning: in OpenShift (for non-rootless podman) it's common to allow CAP_SETGID/CAP_SETUID so you can create a container within a container (this is the allowPrivilegeEscalation setting in SCCs). That effectively grants you the ability to become uid 0 in the container, and in that scenario the container's uid 0 matches the host's uid 0. The important difference is that this instance of the root user doesn't have CAP_SYS_ADMIN (or most of the other privileged kernel capabilities), so the actions it can perform are very limited.
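One way to see what such a process can actually do is to read its capability bounding set out of /proc. A Linux-only sketch (the CAP_SYS_ADMIN bit index comes from linux/capability.h; nothing here is from the article):

```python
def bounding_caps(pid: str = "self") -> int:
    """Return a process's capability bounding set as a bitmask,
    parsed from the CapBnd line of /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("CapBnd:"):
                return int(line.split()[1], 16)
    raise RuntimeError("CapBnd line not found")

CAP_SYS_ADMIN = 21  # bit index from linux/capability.h

def has_sys_admin(pid: str = "self") -> bool:
    """True if CAP_SYS_ADMIN is still in the bounding set."""
    return bool((bounding_caps(pid) >> CAP_SYS_ADMIN) & 1)
```

Inside a properly restricted container, `has_sys_admin()` should return False even for uid 0.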
I know how the reference exploit works, but that's not the only way to exploit the bug.
It's a shame, this seems like an interesting topic but I can't get past the blatant AI-isms littered throughout.
>This is not raw shellcode — it is a fully formed ELF executable
Please post a tl;dr at the top or even in the subject. Many of us are scrambling to patch/reboot our **.
This isn't a new CVE. It's just documenting what happened when this person ran the exploit inside a certain type of container.
tl;dr (not from article)
That just prevents the faulty module from loading, so you have time to fix it properly (kernel upgrade).
Technically there should be zero impact (the very, very few tools that use it will fall back to userspace). I haven't even found that module loaded in our infrastructure.
Then check if it is loaded, and if it is, unload/reboot
Though this won't work for some kernels:
If algif_aead was a builtin module, it needs to be disabled by adding initcall_blacklist=algif_aead_init to the boot cmdline.
However initcall_blacklist requires the kernel to be built with CONFIG_KALLSYMS.
Dumb question: is preventing the module from loading safe to blindly run on, e.g., Unraid, Proxmox, WSL2? Is it possible to break anything?
I would say any sanely written application would fall back to doing the requested operations in userspace if it cannot use the AF_ALG socket.
It could fail though. But I have not yet heard of anyone noticing big problems due to disabling the problematic modules. And I have not noticed any such issues on our systems at ${DAYJOB}.
IMHO, since these parts of the Linux kernel are so crappy I personally would say disabling them is a good default choice. YMMV. But if you encounter problems, then you can always re-enable the modules. (Preferably after upgrading your kernel, obviously.)
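That fallback pattern looks roughly like this — an illustrative sketch of hashing via AF_ALG with a userspace fallback, not code from any particular application. A well-behaved app catches the error instead of failing when the kernel interface is unavailable:

```python
import hashlib
import socket

def sha256_digest(data: bytes) -> bytes:
    """Hash via the kernel crypto API (AF_ALG) when available,
    otherwise fall back to a userspace implementation."""
    try:
        # AF_ALG is Linux-only; bind() fails if the algif modules are
        # disabled, and socket() may be blocked by a seccomp policy.
        with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET) as alg:
            alg.bind(("hash", "sha256"))
            op, _ = alg.accept()
            with op:
                op.sendall(data)
                return op.recv(32)  # SHA-256 digest is 32 bytes
    except (AttributeError, OSError):
        # Userspace fallback: same result, no kernel crypto involved.
        return hashlib.sha256(data).digest()
```

Either path returns the same digest, which is why disabling the modules is usually invisible to applications.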
It already has a table of contents. The heading titled "why rootless containers stopped the escalation" is your tl;dr.