https://erlangforums.com/t/hornbeam-wsgi-asgi-server-for-run... https://github.com/benoitc/hornbeam
It's a very different approach than ex_cmd, as it's not really focused on the "streaming data" use case. Mine is a very command/reply oriented approach, though the commands can flow both ways (calling BEAM modules from Python). The assumption is that big data is passed around out of band; I may have to revisit that.
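To give a feel for the command/reply shape (this is just an illustrative sketch, not hornbeam's actual wire protocol; I'm using a plain 4-byte length-prefixed port with JSON payloads, and `Jason` plus the module name are placeholders):

```elixir
defmodule CommandPort do
  # Spawn the Python side over a 4-byte length-prefixed pipe ({:packet, 4}).
  def start(python_script) do
    Port.open(
      {:spawn_executable, System.find_executable("python3")},
      [:binary, {:packet, 4}, args: [python_script]]
    )
  end

  # One command in, one reply out.
  def call(port, command, params, timeout \\ 5_000) do
    send(port, {self(), {:command, Jason.encode!(%{cmd: command, params: params})}})

    receive do
      {^port, {:data, reply}} -> {:ok, Jason.decode!(reply)}
    after
      timeout -> {:error, :timeout}
    end
  end
end
```

The reverse direction (Python calling BEAM modules) is the mirror image: unsolicited frames arrive as `{port, {:data, frame}}` messages that the owning process decodes and dispatches.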
You may run into some issues with Docker and native deps once you get to production. Don't forget to cache the Bumblebee files (the downloaded model weights).
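One way to handle the caching is to warm Bumblebee's download cache at `docker build` time so the model files land in an image layer instead of being fetched on every cold boot. A rough sketch, assuming `BUMBLEBEE_CACHE_DIR` (the variable Bumblebee uses for its download cache, as far as I remember) points at a directory that stays in the image, and with a placeholder model name:

```elixir
# priv/warm_cache.exs — run during the image build, e.g.
#   RUN mix run priv/warm_cache.exs
# so the Hugging Face downloads end up cached in a Docker layer.
{:ok, _model_info} = Bumblebee.load_model({:hf, "bert-base-uncased"})
{:ok, _tokenizer} = Bumblebee.load_tokenizer({:hf, "bert-base-uncased"})
```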
At my work we run a fairly large webshop and have a ridiculous number of jobs running at all times. At this point most are running in Sidekiq, but a sizeable portion remains in Resque simply because it does just that: starts a process per job.
Resque workers start each job by forking a child process, and that child becomes the actual worker.
So when a job allocates half your available RAM, it's all discarded and returned to the OS once the child exits, which is FANTASTIC.
Sidekiq, and most other job queues, use threads, which is great, but all the RAM allocated to the process stays allocated, and mostly unused. It's especially bad with the default malloc. We used jemalloc for a while, which helped since it handles memory better for multithreaded applications, but the easiest fix is to just create a process.
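Translated to this thread's setup, the equivalent of Resque's fork-per-job is pushing the heavy step into a short-lived OS process from the BEAM; a rough sketch (module and script names invented):

```elixir
defmodule OneShotJob do
  # Run the memory-hungry step in its own throwaway OS process. Whatever it
  # allocates goes back to the OS the moment it exits, so the long-lived
  # node's RSS stays flat.
  def run(script, args) do
    case System.cmd("python3", [script | args], stderr_to_stdout: true) do
      {output, 0} -> {:ok, output}
      {output, exit_status} -> {:error, exit_status, output}
    end
  end
end

# OneShotJob.run("process_csv.py", ["huge_export.csv"])
```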
I don't know how memory-intensive ML is; what generally screwed us over was image processing (ImageMagick and its many memory leaks) and... large CSV files. Yeah, come to think of it, you made an excellent architectural choice.