Thanks for the pointer! Love the premise of the project. Just a few notes:

- A security-focused project should NOT default to training people to install by piping to bash. If I try previewing the install script in the browser, it forces a download instead of showing plain text. The first thing I see is an argument

# --prefix DIR Install to DIR (default: ~/.smolvm)

whose lib folder the script later rm -rf's. So if I accidentally pick a directory that already contains ANY lib folder, it will be deleted.

- I'm not sure what the comparison to colima with krunkit machines amounts to, other than that you don't use VM images; how this works, or why it's better, isn't 100% clear.

- Just a minor thing, but attention spans are short: I saw AWS and fly.io in the description and nearly closed the project. It needs to be immediately obvious that this is a local sandbox built on libkrun, NOT a wrapper around a remote sandbox like so many projects out there.
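The rm -rf hazard in the first point could be guarded with an install sentinel; here's a minimal sketch, where `.smolvm-install` is a hypothetical marker the installer would drop into any prefix it owns:

```shell
# sketch of a safer cleanup step: refuse to rm -rf unless the prefix is
# provably ours (.smolvm-install is a hypothetical sentinel file)
safe_clean() {
  prefix="$1"
  if [ -f "$prefix/.smolvm-install" ]; then
    rm -rf "$prefix/lib"
  else
    echo "refusing to touch $prefix: no install sentinel found" >&2
    return 1
  fi
}
```

With this, pointing `--prefix` at a directory that merely happens to contain a lib folder fails loudly instead of deleting it.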

Will try reaching you on some channel; would love to collaborate, especially on devX. I'd be very interested in something more reliable and a bit more lightweight in place of colima, once libkrun can fully replace vz.

reply
Love this feedback, agree with you completely on all of it - I'll be making those changes.

1. In comparison with colima with krunkit: I ship smolvm with a custom-built kernel + rootfs, with a focus on the virtual machine itself as opposed to running containers (though running containers inside it is supported).

The customizations are also open source: https://github.com/smol-machines/libkrunfw

2. Good call on that description!

I've reached out to you on linkedin

reply
What is the alternative to bash piping? If you don't trust the project install script, why would you trust the project itself? You can put malware in either.
reply
That assumes you even need an install script. 90% of install scripts just check the platform, make the binary executable, and put it in the right place. Just give me links to a GitHub releases page with immutable releases enabled and plain binaries. I download the binary, put it in a temporary folder, and run it with a seatbelt profile that logs what it does. Binaries should "just run" and at most access one folder, in a place they show you, and that is configurable! Fuck installers.
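For the curious, a minimal sketch of the seatbelt idea on macOS (SBPL syntax; the scratch path is illustrative, and `sandbox-exec` is deprecated but still ships with the OS):

```shell
# sketch: confine a downloaded binary's writes to one scratch directory
# (SBPL profile; paths are illustrative, sandbox-exec is deprecated but present)
cat > confine.sb <<'EOF'
(version 1)
(allow default)
(deny file-write*)
(allow file-write* (subpath "/private/tmp/scratch"))
EOF
# then run the untrusted binary under it:
#   sandbox-exec -f confine.sb ./downloaded-binary
```

A real profile for logging rather than denying would need more rules, but this shows the shape: one readable file that states exactly what the binary may touch.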
reply
It turns out that it's possible for the server to detect whether it is being run via "| bash" or merely downloaded. Downloading, inspecting, and then running that specific download is safer than sending it directly to bash; even downloading and inspecting first doesn't help if you then re-download and pipe to a shell, because the server can serve different bytes the second time.
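The safer pattern described here, sketched with a local file standing in for `curl -fsSL -o install.sh <url>`:

```shell
# sketch: inspect a download, then execute those exact bytes, no second fetch
# (a local file stands in for the real `curl -o install.sh <url>` download)
printf '#!/bin/sh\necho installed\n' > install.sh
cat install.sh      # read what you actually fetched (use your pager of choice)
sh ./install.sh     # run the reviewed file itself, not a fresh download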
reply
The server can also put malware in the .tar.gz. Are you really checking all the files in there, even the binaries? If you aren't, what's the point of checking only the install script?
reply
> Are you really checking all the files in there, even the binaries?

One should never trust the binaries, always build them from source, all the way down to the bootloader.

https://bootstrappable.org/

Checking all the files is really the only way to deal with potential malware, or even security vulns.

https://github.com/crev-dev/

reply
Nice ideal, but Chrome/Firefox would take days to build on your average laptop (if it doesn't run out of memory first).
reply
The latest Firefox build that Debian did took just over an hour on amd64/armhf and 1.5 hours on ppc64el. The slowest Debian architecture is riscv64, and the last successful build there took only 17.5 h, so definitely not days. An average modern developer-class laptop will take a lot less than riscv64, too.
reply
> If you don't what's the point of checking only the install script?

The .tar.gz can be checksummed and saved (so you can be sure later that you're installing the same .tar.gz and that it still has the same checksum). Piping to bash in one go, not so much. Once you've captured the .tar.gz, you can both reproduce any exploit it contains (it's too late for the exploit to hide: you've got the .tar.gz, and you may already have saved it to an append-only system, for example) and verify the checksum of the .tar.gz with other people.
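The pin-and-recheck step being described is just this (the tarball is created locally here as a stand-in for the real download; on macOS, `shasum -a 256` replaces `sha256sum`):

```shell
# sketch: pin the exact artifact so any later swap is detectable
# (the tarball is a local stand-in for the real .tar.gz download)
printf 'release payload' > smolvm.tar.gz
sha256sum smolvm.tar.gz > smolvm.tar.gz.sha256   # record at first download; archive both
# later, before reinstalling or comparing digests with other people:
sha256sum -c smolvm.tar.gz.sha256                # fails loudly if the bytes changed
```

Nothing equivalent exists for a one-shot pipe to bash: the bytes are gone as soon as they've executed.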

The point of doing all these verifications is not only to not get an exploit: it's also to be able to reproduce an exploit if there's one.

There's a reason, say, packages in Debian are nearly all both reproducible and signed.

And there's a reason they're not shipped with piping to bash.

Other projects do offer an install script that downloads a file but verifies its checksum. That's the case with the Clojure installer, for example: it verifies the .jar. Now I know what you're going to say: "but the .jar could be backdoored if the site got hacked, since both the checksum in the script and the .jar could have been modified". Yes. But it's also signed with GPG. And I religiously verify that the "file inside the script" has a valid signature when it has one. And if the signing key suddenly changed, that would ring alarm bells.

Why settle for the lowest common denominator security-wise? Because Anthropic (I pay for my subscription, btw) sets a very bad example and relies entirely on the security of its website and pipes to bash? This is high-level suckage. A company should know better: it should sign the files it ships and not encourage lame practices.

Once again: all these projects that suck security-wise are systematically built on the shoulders of giants (like Debian) who know what they're doing and who are taking security seriously.

This "malware exists so piping to bash is cromulent" mindset really needs to die. That mentality is the reason we get major security exploits daily.

reply
> And I do religiously verify that the "file inside the script" does have a valid signature when it has one.

If you want to go down this route, there is no need to reinvent the wheel. You can add custom repositories to apt (and friends): you only need to do this once and verify the repo key, and then you get this verification and installation infrastructure automatically. Of course, not every project offers one.
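The one-time setup amounts to something like this (all names and URLs are illustrative); once the key is trusted, apt verifies the signature of every package it installs from the repo:

```shell
# sketch: register a signed third-party apt repo (names/URLs illustrative)
echo "deb [signed-by=/usr/share/keyrings/example.gpg] https://apt.example.com stable main" \
  > example.list   # normally written to /etc/apt/sources.list.d/example.list
# the repo key is fetched once and verified out-of-band, e.g.:
#   curl -fsSL https://apt.example.com/key.asc | gpg --dearmor -o /usr/share/keyrings/example.gpg
```

After that, a plain `apt install example-tool` gets the full verification pipeline for free.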

reply
Probably tangential to your project, but did you try SmolBSD? <https://smolbsd.org> It's a meta-OS for microVMs that boots in 10–15 ms.

It can be dedicated to a single service (or a full OS), runs a real BSD kernel, and provides strong isolation.

Overall, it fits into the "VM is the new container" vision.

Disclaimer: I follow iMil (the developer of smolBSD and a contributor to NetBSD) through his Twitch streams, and I truly love what he is doing. I haven't actually used smolBSD in production myself since I don't have a need for it (though I've participated in his live streams by installing and running his previews), and my answer might be somewhat off-topic.

More here <https://hn.algolia.com/?q=smolbsd>

reply
First time hearing about it, thanks for sharing!

At a glance, it's a matter of compatibility: most software has first-class support for Linux. But very interesting work, and I'm going to follow it closely.

reply
What would the advantage of this be compared to using something like a Firecracker backend for containerd?
reply
Runs locally on Macs, is much easier to install/use, and is designed to be "portable", meaning you can package a VM to preserve its state and run it somewhere else.

I worked at AWS, specifically with Firecracker in the container space, for 4 years. We had a very long onboarding doc for developing on Firecracker for containers... so I made sure to focus on ease of use here.

reply
Firecracker does not run on macOS and has no GPU support.
reply
It looks like you may be interested in Qubes OS, https://qubes-os.org.
reply