Hey this is super cool. I've been researching tech like this for my AI sandboxing solution, ended up with Lima+Incus: https://github.com/JanPokorny/locki

My problem with microVMs was that they usually won't run Docker / Kubernetes. I work on apps that consist of whole Kubernetes clusters and want the sandbox to contain all of that.

Does your solution support running k3s for example?

reply
We will evaluate it. I created this issue to track it: https://github.com/smol-machines/smolvm/issues/150

Really appreciate the feedback!

reply
What is the status of supporting live migration?

That's the one feature of similar systems that always gets left out. I understand why: it's not a priority for "cloud native" workloads. The world, however, has workloads that are not cloud native, because going cloud native comes at a high cost, and it always will. So if you'd like a real value-add differentiator for your micro-VM platform (beyond what I believe you already have), there you go.

Otherwise this looks pretty compelling.

reply
It helps if you offer a concrete use case: how large the heap is, what kind of blackout period you can handle, whether the app can tolerate all of its open connections being destroyed, etc. The more an app can handle resetting some of its own state, the easier LM is going to be to implement. If your workload jives with CRIU (https://github.com/checkpoint-restore/criu), you could do this already.
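
For reference, a minimal CRIU round trip looks roughly like this (sketch only: the PID and dump directory are made-up placeholders, and it assumes the process tree uses nothing CRIU can't checkpoint, e.g. no unsupported fd types):

    # Checkpoint a running process tree into an images directory.
    # --shell-job is required for jobs started from a terminal.
    sudo criu dump -t 12345 -D /tmp/ckpt --shell-job

    # Ship /tmp/ckpt to the destination host, then restore there:
    sudo criu restore -D /tmp/ckpt --shell-job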

By what I assume is your definition, there are plenty of "non cloud native" workloads running on clouds that need live migration. Azure and GCP use LM behind the scenes to give the illusion of long-uptime hosts. Guest VMs are moved around for host maintenance.

reply
"Azure and GCP use LM behind the scenes"

As does OCI, and (relatively recently) AWS. That's a lot of votes.

Use case: some legacy database VM needs to move because the host needs maintenance, the database storage (as opposed to the database software) is on an iSCSI/NFS/NVMe-oF array somewhere, and clients are just smart enough to transparently handle a brief disconnect/reconnect (which is built into essentially every such database connection pool stack today).

Use case: a web app platform (node/spring/django/rails/whatever) with a bunch of cached client state needs to move because the host needs maintenance. The developers haven't done all the legwork to make the state survive a restart, and they'll likely never get the time needed to do that. That's essentially the same use case as the previous one. It's also rampant.

Use case: a long-running batch process (training, etc.) needs to move for whatever reason, ops can't wait for it to stop, and they can't kill it because time == money. It doesn't matter that it takes an hour to move because of the big heap, as long as the previous 100 hours aren't lost.

"as in how large the heap is"

That's an undecidable moving target, so let the user worry about it. Trust them to figure out what is feasible given the capabilities of their hardware and talent. They'll do fine if you provide the mechanism. I've been shuffling live VMs between hosts successfully for 10+ years, and QEMU/KVM has been capable of it for nearly 20, never mind VMware.

"CRIU"

Dormant, and still containers. Also, it's re-solving solved problems once you're running in a VM, but with more steps.

reply
Really appreciate the suggestion! By "live migration", do you mean keeping the existing files and migrating them elsewhere with the VM?

Thanks

reply
I mean making any given VM stop on host A and appear on host B; e.g., with standard QEMU/KVM:

    virsh migrate --live GuestName DestinationURL

This is feasible when network storage is available, and useful when a host needs to be drained for maintenance.
reply
Live migration and the tech powering it was the hardest thing I ever built. It's something that I think will come naturally to projects like smolvm as more of the hypervisors build it in, but it's a deeply challenging task to do in userspace.

My team spent 4 months on our implementation of VM memory that let us do it, and it's still our biggest time suck. We were also able to make assumptions, like having RDMA, that are not generally available.

All that to say, as someone not working on smolvm, I am confident smolvm and most other OSS sandbox implementations will get live migration via hypervisor upgrades in the next 12 months.

Until then, there are enterprise-y providers that have it, and great OSS options that already solve this, like Cloud Hypervisor.

reply
I see. So right now smolvm can be stopped, then "packed" (think of it as compressed), and restarted on a different host. Files in the disks are preserved, but memory snapshotting is still hard tbh
reply
It's also feasible without network storage, --copy-storage-all will migrate all disks too.
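
Concretely (the destination URI is a placeholder; assumes libvirt on both hosts and SSH reachability between them):

    virsh migrate --live --copy-storage-all GuestName qemu+ssh://hostB/system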
reply
What percentage of this code was written by LLM/AI?
reply
For myself, I'd estimate ~50%

It wasn't useful for things it hadn't been trained on before. But now that I have the core functionality in place, it's been of great help.

reply
Hey mathematician, how much of this formula did you calculate with an abacus instead of a calculator?
reply
Hey 'software engineer', how much of the output of an LLM is actually reproducible, compared to a calculator or any programming language given the same input across different sessions?
reply
Why are you so concerned about the LLM producing the exact same code across different sessions? Seems like a really weird thing to focus on. Why aren't you focused on things like security, maintainability, UI/UX, performance?
reply
The rehydration of images, or rather portable artifacts, on any platform, plus the packaging, is neat. I have been working on https://instavm.io for some time, around VM-based sandboxes and related infra for agents, and this is refreshing to see.
reply
+1. I built something similar called shuru.run because I wanted an easy way to set up microVM sandboxes to run some of my AI apps, and Firecracker wasn't available for macOS (and, as you said, it is just too heavy for normal user-level workloads).
reply
Nice work on Shuru — I remember looking at it when I was researching this space. You went with a Rust wrapper on Apple’s Virtualization framework, right?

I have been working on something similar but on top of firecracker, called it bhatti (https://github.com/sahil-shubham/bhatti).

I believe anyone with a spare linux box should be able to carve it into isolated programmable machines, without having to worry about provisioning them or their lifecycle.

The documentation’s still early, but I have been using it for orchestrating parallel work (with deploy previews), offloading browser automation for my agents, etc. An auction-bought Hetzner server is serving me quite well :)

reply
bhatti's CLI looks very ergonomic! Great job!

also, yes, shuru was (and still is) a wrapper over Virtualization.framework, but it now supports Linux too (wrapper over KVM lol)

reply
Yes, having a lightweight solution for local devices is one primary goal of the design. Another is to make it easy to host, whether self-hosted or managed.
reply
You could add OrbStack to the comparison table
reply
Will do. Thanks for the suggestion!
reply
What were the biggest challenges in designing the VM to have subsecond start times? And what are the current bottlenecks for decreasing the start time even further?
reply
No special programming tricks were used.

Linux was built in the '90s. Hardware has improved more than 1000x since. Linux virtual machine startup times have stayed relatively the same.

Turns out we kept adding junk to the Linux kernel and its bootup operations.

So all I did was cut and remove unnecessary parts, checking at each step that it still worked.

This ended up getting boot times to under 1s. The kernel changes are the 10 commits I made; you can verify them here: https://github.com/smol-machines/libkrunfw

There's probably more fat to cut to be honest.
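
To give a flavor of the kind of cutting involved (an illustrative kernel .config fragment, not the project's actual diff; these are standard options a single-purpose microVM guest typically doesn't need):

    # CONFIG_SOUND is not set
    # CONFIG_USB_SUPPORT is not set
    # CONFIG_WIRELESS is not set
    # CONFIG_DRM is not set
    # CONFIG_SUSPEND is not set

Every subsystem left out is code the kernel never has to initialize at boot.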

reply
hi, great project! Windows support is sorely missing, though. As someone working a lot with sandboxed LLMs right now, the option-space on Windows for sandboxing is _extremely limited_. Any plans to support it?
reply
Hey, thanks so much! Yeah, we will definitely add Windows support later. We are exploring how to get this working with WSL and will release it ASAP. Stay tuned, and thanks!
reply
Yeah, it's on my mind.

WSL2 runs a Linux virtual machine. It will take some time and care to wire that up, but it's definitely feasible.
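
For anyone experimenting in the meantime: on recent Windows builds, WSL2 can expose nested virtualization to the guest (which a KVM-backed VM inside WSL2 would need) via a .wslconfig entry. This is a general WSL2 setting, not anything smolvm-specific:

    [wsl2]
    nestedVirtualization=true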

reply