Worker processes are forked from the master, which means they receive the same memory layout. You get unlimited crashes against the worker. There's probably a way to exploit that to get a read oracle. At the very least this is a reliable denial of service.

Depth First's full writeup: https://depthfirst.com/research/nginx-rift-achieving-nginx-r...

reply
Sure, but I think the GitHub README ought to make it clearer that the PoC as-is doesn't work against nginx on any current Linux distro.
reply
So you're not vulnerable to script kiddies running the published PoC. You're still probably vulnerable to a sufficiently motivated attacker.
reply
I doubt it: ASLR is not as easy to break on modern Linux as everyone in this thread wants to pretend it is. And anybody who actually cares enough about security that a compromised web frontend is the end of the world should be doing other things that would additionally mitigate this...

I know they claimed they can bypass it: if that's true, they should publish it. The forking nature of nginx is uniquely bizarre and vulnerable, and I strongly suspect that's the only way they're pulling it off. I feel like that's the interesting thing here, not the buffer overrun.

reply
Apache used forked processes; I don't think that's unique or a particular issue. NGINX uses async I/O to handle requests, which is a substantial upgrade over Apache; that's why it's performant.

Memory corruption vulnerabilities are possible in any language that copies data between buffers without in-language bounds checks.

This vulnerability does not require knowledge of the memory layout to generate worker crashes against a system with vulnerable configurations.

The vulnerability is not the end of the world. Most system administrators will upgrade nginx once the security patch lands in their distribution's repositories (right now it's available only in Debian unstable, for example). In the meantime, sysadmins will likely remove the vulnerable directives from their nginx configs.

reply
> Apache used forked processes; I don't think that's unique or a particular issue.

Of course it is... in a typical threaded daemon, the threads have randomized stack addresses. Exactly as you observed, you get unlimited tries because nginx dutifully restarts the worker process with the same literal stack address every time it segfaults. I'm willing to bet the ASLR break they claim to have relies on that, but I'd be happy to be proven wrong if they publish it :)

reply
This is a heap exploit. Threads share the heap with the main process.
reply
I mean... you're missing the forest for the trees, but yes, I meant "address space" generally, not "stack" specifically. The nginx workers are forked processes; it would not be terribly complex to set up a heap with a new random base address in each worker (the only real complexity is dealing with heap allocations made before fork()). But the stack matters too, generally more so.
reply
In your software, you set up a new heap for every pthread? I have never encountered this design pattern and would like to learn more.
reply
deleted
reply