After building this, I don't think this is necessarily a desirable trade-off, and decades of OS development certainly suggest process-isolated memory is desirable. I see this more as an experiment to see how bending those boundaries works in modern environments, rather than a practical way forward.
One of my hobby open source projects includes multiple services, and I don't want to start and stop them individually just to test anything. They're designed to run standalone since it's a distributed system, but having to launch and stop each individual process added friction that lowered my enjoyment of working on it.
I recently ended up redesigning each service so they can run as a process or within a shared process that just uses LoadLibrary/dlopen (when not statically linked) to be able to quickly bring everything up and down at once, or restart a specific service if the binary changes.
Sure, a crash in one service takes down everything rather than just that service, but it's lightweight (no need for complex deployment setups) and made the project far more enjoyable to work on. It's adequate for development work.
Another plus has been a much cleaner architecture after doing the necessary untangling required to get it working.
Still, I actually do think there could be an advantage to this if you know you can trust the executables and they don't share any memory. If nothing is shared and you're only grabbing memory with malloc and the like, there's an argument to be made that the extra process overhead isn't needed.
If you're OK with threads keeping their own memory and not sharing it, pthreads already do that competently without any additional library. The problem with threads is the shared address space: thread B can screw up thread A's memory, juggling concurrency is hard, and you need mutexes. Processes give isolation, but at the cost of some overhead, and IPC generally requires copying.
I'm just not sure what this actually provides over vanilla pthreads. If I'm in charge of ensuring that the threadprocs don't screw with each other then I'm not sure this buys me anything.
Without locking, if multiple of these things read or write the same location, the CPU will not appreciate it: you might read or write partial values, or garbage.
Still a fun little project, but I don't see a use case personally. (Perhaps the author has a good one; it's not impossible.)
I fail to see the point. If you control the code and need performance so badly that an occasional copy hurts, you can just as well link it all into a single address space without these hoops, and once it's modified to rely on passing pointers around it won't function as separate processes anyway.
And if you don't, good luck chasing memory corruption issues.
Besides, what's wrong with shared memory?
I generally think that it's bad to share memory for anything with concurrency, simply because it can make it very hard to reason about the code. Mutexes are hard to get right for anything that's not completely trivial, and I find that it's almost always better to figure out a way to do work without directly sharing memory if possible (or do some kind of borrow/ownership thing like Rust to make it unambiguous who actually owns it). Mutexes can also make it difficult to performance test in my experience, since there can be weird choke points that don't show up in local testing and only ever show up in production.
Part of the reason I love Erlang so much is specifically because it really doesn't easily allow you to share memory. Everything is segmented and everything needs to be message-passed, so you aren't mucking around with mutexes and it's never ambiguous where memory lives or who owns it. Erlang isn't the fastest language, but since I'm not really dealing with locks, the performance is generally much more deterministic for me.