ASLR is a defense-in-depth technique intended to make exploitation more difficult. In almost all cases, adding an ASLR bypass is only a matter of time and skill, and both requirements keep being lowered by LLM agents every few weeks. It is only a matter of time (and probably not much time) until a fully weaponized exploit is developed. It may be published, or it may be kept private.
It is straight up wrong to say "if you have ASLR enabled, you're not at any risk from this", and statements like that are extremely harmful to anyone who trusts them.
The mistaken belief that you needn't care about security vulnerabilities because mitigations may make exploitation more difficult has already caused plenty of harm in the past. Be glad that modern mitigations exist, but patch your stuff asap. If you are a vendor, do not treat vulnerability reports as invalid because the researcher has not provided an ASLR bypass. Fix the root cause and hope the mitigations buy you enough time to patch before you get owned.
It's a matter of time before this exploit is chained with an ASLR bypass, but it allows for a slightly wider patch window at the very least.
At the moment though, the preconditions look odd. I've been using nginx in various configurations for 10 years and have never once combined rewrite and set.
Not extremely common, but it does happen.
I disagree with this take, or would at least phrase it differently. ASLR is like an extra password you need to guess: it has a certain amount of entropy and is usually stable. Unless the vulnerability includes a component that leaks information, ASLR completely mitigates it, or you need a second vulnerability; and that is a different conversation. ASLR can completely mitigate an individual vulnerability, but possibly not an exploit chain.
I would still use the possibility of a second, info-leaking vulnerability as an argument for patching quickly anyway. But exploit chains are a risk for all kinds of vulns.
I suppose, to keep the password analogy together: people reuse passwords all the time, timing attacks exist, etc.?
History shows that "meh, ASLR mitigates this" is a vastly bolder claim anyway, so I don't feel much need to defend my position here.
Edit: Even the authors of this poc seem to agree with me https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
Obviously you do need to defend it; that is quite a generalization. You need to show how the vulnerability itself reduces the entropy of ASLR.
The authors don't really support that claim. They just say that they can brute-force it without crashing the whole nginx instance, but they don't say how the entropy is reduced. They have zero information about where the child process even starts, whether they hit the child, or whether it is even the same child. So please give us precise technical reasoning for why ASLR is not mitigating here.
> You need to prove how the vulnerability itself reduces the entropy of ASLR
Not really? Looks like we have a controlled-length overflow on a fork-based server, a situation where ASLR is known to not be very useful.
It does not work like that; there are certain preconditions. You also need a reliable oracle that tells you when you actually hit the child process, whether the child crashed, and whether you are even talking to the same child. Once you can retrieve that information, you have effectively removed re-randomization between attempts. That reduces the entropy, but it only helps if the remaining search space is small enough. They don't show that they have such an oracle.
Additionally, for RCE you need to find the libc base, which is randomized separately. The post simply ignores how the authors got that address. For that, you most likely need an information leak from a second vulnerability, even if you can brute-force the overflow itself.
You can safely assume a 1:1 overlap between the people that claim "AI will solve cyber" (and they always say 'cyber') and the people saying this.
Kind of feels like the burden is on the one who is reading it though, good luck stopping people from spreading misinformation on the internet, most of them don't even know they're wrong.
What's extremely harmful is trusting random internet comments stating stuff confidently. Get good at seeing through that, and it'll serve you well in security and beyond.
Requires a "rewrite" directive with a question mark in the replacement string, followed by a "set" directive that references a regex capture group (e.g. set $var $1).
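A minimal config matching that description might look like this (an illustrative sketch based on the stated preconditions, not a tested trigger from the advisory):

```nginx
location /old/ {
    # unnamed capture plus a "?" in the replacement string
    rewrite ^/old/(.*)$ /new/$1?src=legacy break;
    # subsequent directive referencing the capture group
    set $var $1;
}
```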
Also the POC assumes ASLR is disabled.
If you were to do it by hand, nginx doesn't come to mind as a likely candidate.
# sysctl kernel.randomize_va_space
kernel.randomize_va_space = 2
Typical invocation: checksec.sh --proc-all
This will list the status of RELRO, stack canaries, NX/PaX, and PIE for all running daemons. My CachyOS installation, for example, is missing stack canaries for all daemons. For a single process: checksec.sh --fortify-proc 732
* Process name (PID) : sshd (732)
* FORTIFY_SOURCE support available (libc) : Yes
* Binary compiled with FORTIFY_SOURCE support: N
Some additional compile-time hardening options [2] and discussion [3]. Even Rust apparently has some compile-time security-related options.
[1] - https://www.trapkit.de/tools/checksec/ # some Linux repositories already contain "checksec"
[2] - https://best.openssf.org/Compiler-Hardening-Guides/Compiler-...
Apache still runs about 23-28% of websites (with some measurements suggesting it is pretty close to equal with nginx). PHP is still in use by 70-80% of websites (numbers vary depending on where you look).
You make it sound like both pieces of tech are irrelevant. Nothing could be further from the truth.
some quick googled examples (like I said other sites' numbers vary, but you get the general idea):
https://www.wappalyzer.com/technologies/web-servers/
https://kinsta.com/php-market-share/
As noted elsewhere, ASLR protects you. While you are waiting for your affected platform to get the fix, they note the mitigation:
"use named captures instead of unnamed captures in rewrite definition"
"To mitigate this vulnerability for this example, replace $1 and $2 with the appropriate named captures, $user_id and $section"
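Concretely, applying that guidance to a rule of the vulnerable shape looks something like this (my own illustrative example using the advisory's $user_id/$section names, not a snippet from the advisory itself):

```nginx
# before: unnamed captures referenced after the rewrite
rewrite ^/users/(\d+)/(\w+)$ /app?u=$1&s=$2 last;

# after: named captures, per the advisory's guidance
rewrite ^/users/(?<user_id>\d+)/(?<section>\w+)$ /app?u=$user_id&s=$section last;
```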
F5 patched 1.31.0 and 1.30.1.
OpenResty has a patch for 1.27 and 1.29: https://github.com/openresty/openresty/commit/ee60fb9cf645c9...
You can track OpenResty's (a Lua application server based on Nginx) progress here: https://github.com/openresty/openresty/issues/1119
Depth First's full writeup: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
I know they claimed they can bypass it: if that's true, they should publish it. The forking nature of nginx is uniquely bizarre and vulnerable, and I strongly suspect that's the only way they're pulling it off. I feel like that's the interesting thing here, not the buffer overrun.
Memory corruption vulnerabilities are possible whenever a language is used that performs copies of data across buffers without in-language guards.
This vulnerability does not require knowledge of the memory layout to generate worker crashes against a system with vulnerable configurations.
The vulnerability is not the end of the world. System administrators will upgrade nginx with the security patch when it's released across most distribution paths (right now it's available only on unstable Debian for example). In the meantime sysadmins will likely remove the vulnerable directives from nginx configs.
Of course it is... in a typical threaded daemon, the threads have randomized stack addresses. Exactly as you observed, you get unlimited tries because nginx dutifully re-forks the worker process with the same literal stack address every time it segfaults. I'm willing to bet the ASLR break they claim to have relies on that, but I'd be happy to be proven wrong if they publish it :)
https://presentations.nordisch.org/apparmor/
https://github.com/nobody43/apparmor-profiles/blob/master/ng...
https://github.com/nobody43/apparmor-suggest
Disclaimer: I'm the author of both repos.
On the one hand Apache and Nginx are mature and proven but, being written in C, they will always suffer from memory-safety issues like this one and the recent Apache vulnerabilities.
On the other hand, the alternatives are perhaps not as mature and perhaps not implemented as securely as they could be, given that e.g. Caddy had multiple vulnerabilities in its request parsing this year and Jetty's shell injection vulnerability seems easily foreseeable and avoidable. Using a memory-safe language doesn't help much if you then (to take an unrelated but well-known example) implement arbitrary code execution as a feature in the logging library.
People keep forgetting that with static linking they are back to 1980s IPC for application extensions, or to rebuilding from scratch every time they need to reconfigure the application.
But I haven't seen a whole lot of discussion of http servers in memory safe languages. The big three C-based servers: Apache, Nginx, and lighttpd are all pretty solid... I don't think there's a lot of people interested in giving that up for a new project just because of the language.
I'll also add that when you pick up most memory safe languages, you're also picking up their sometimes extensive runtime / virtual machine and all the accoutrements. A Java webserver probably uses log4j because any random Java project probably does, etc.
Most nginx use cases are to terminate TLS and then pass the request to node/php/go/etc. So I bet you have at least one set with attacker-controlled data, on a line like 'proxy_set_header X-Host $host;'
edit: nvm. Apparently named captures are not affected. Unless you have a $1 somewhere, it should be fine.
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;

First time I’ve seen feng shui used in this manner..?
Honestly it's such a weird feature, if you're doing complicated redirects like this in nginx where PCRE is necessary, you should do it in your application code. And if you need speed use ngx_http_lua_module.
Am I understanding you correctly?
It triggers on a very common pattern: a `rewrite` directive (with an unnamed capture like $1/$2 and a `?` in the replacement string) followed by `set`, `if`, or another `rewrite`. The root cause is a classic two-pass script engine bug (length calculation vs. actual copy pass with ngx_escape_uri).
The PoC turns it into unauthenticated RCE using cross-request heap feng shui + pool cleanup pointer corruption. Tested with a simple Docker setup.
- Repo + Python exploit: https://github.com/DepthFirstDisclosures/Nginx-Rift
- Full technical write-up: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...
- F5 advisory + patches (1.31.0 / 1.30.1 for OSS, plus Plus updates): https://my.f5.com/manage/s/article/K000160932 (or the latest K000161019)
Affects basically any NGINX doing URL rewriting in front of apps/PHP/etc. Workaround mentioned is switching to named captures.
The discovery angle is also interesting: it was found autonomously by depthfirst's security analysis tool after one-click onboarding of the NGINX source.
Anyone running NGINX in production using rewrite rules? How are you checking your configs? Thoughts on the exploit chain or the AI-assisted finding process?
https://world.hey.com/dhh/finished-software-8ee43637
https://josem.co/the-beauty-of-finished-software/
Doesn't change the fact that only "breaking" changes in 1.x.x line are changes to defaults.
The venerable unix tool "less" is on v701 and was probably already past 300 before React was born.
How do you think versioning works? You know that it's completely arbitrary and up to the author, right? Very ironic comment.