I can understand the motivation here.

One of my hobby open source projects includes multiple services, and I don't want to start and stop them individually just to test anything. They're designed to run standalone since it's a distributed system, but having to launch and stop each individual process added friction that lowered my enjoyment of working on it.

I recently ended up redesigning each service so it can run either as its own process or inside a shared host process that uses LoadLibrary/dlopen (when not statically linked) to quickly bring everything up and down at once, or to restart a specific service when its binary changes.

Sure, a crash in one service now takes down everything rather than just that service, but it's lightweight (no need for complex deployment setups) and made the project far more enjoyable to work on. It's adequate for development work.

Another plus has been a much cleaner architecture after doing the necessary untangling required to get it working.

reply
It's certainly an interesting idea and I'm not wholly opposed to it, though I certainly wouldn't use it as a default process scheduler for an OS (not that you were suggesting that). My main concern would be security: if there's no process boundary preventing threadproc A from reading threadproc B's memory, there could pretty easily be unintended leakage of sensitive data between services.

Still, I actually do think there could be an advantage to this if you know you can trust the executables and they don't share any memory. If each one only grabs memory with malloc and the like, there's an argument to be made that the extra process overhead isn't buying you anything.

reply