Discussion
harshdoesdev: it's a really innovative idea! very interested in the sub-second cold start claim, how does it achieve that?
fqiao: @binsquare basically brute-force trimmed unnecessary Linux kernel modules, trying to get the VM started with just the bare minimum. There is more room for improvement for sure. We will keep trying!
cr125rider: Great job with the comparison table. Immediately I was like "neat, sounds like Firecracker", then saw your table to see where it was similar and different. Easy! Nice job! This looks really cool.
binsquare: Hello, I'm building a replacement for Docker containers: a virtual machine with the ergonomics of containers plus sub-second start times. I worked at AWS previously in the container space and with Firecracker. I realized the container is an unnecessary layer that slowed things down, and Firecracker was a technology designed for AWS's org structure and use case. So I ended up building a hybrid, taking the best of containers and the best of Firecracker. Let me know your thoughts, thanks!
harshdoesdev: +1. i built something similar called shuru.run because i wanted an easy way to set up microVM sandboxes to run some of my AI apps, and firecracker wasn't available for macOS (and, as you said, it is just too heavy for normal user-level workloads).
fqiao: Yes, having a lightweight solution for local devices as well is one primary goal of the design. Another is to make hosting easy, whether self-hosted or managed.
harshdoesdev: nice! for most local workloads, it is actually sufficient. so, do you ship a complete disk snapshot of the machines?
fqiao: Yes. Files on the disks are kept across stop and restart. We also have a pack command to compress the machine into a single file so that it can be shipped and rehydrated elsewhere.
thm: You could add OrbStack to the comparison table
fqiao: Will do. Thanks for the suggestion!
deivid: With this approach I managed to get to a sub-10 ms start (to pid 1); if you can accept a few constraints, there's plenty of room! Though my version was only tested on Linux hosts.
sdrinf: hi, great project! Windows support is sorely lacking, though. As someone working a lot with sandboxed LLMs right now, the option space for sandboxing on Windows is _extremely limited_. Any plans to support it?
fqiao: Hey, thanks so much! Yeah, we will definitely add Windows support later. We are exploring how to get this working with WSL and will release it asap. Stay tuned and thanks!
binsquare: Yeah, it's on my mind. WSL2 runs a Linux virtual machine. It will take some time and care to wire that up, but it's definitely feasible.
0cf8612b2e1e: This looks very cool. Does the VM machinery still work if I run it in bubblewrap? Can it talk to a GPU? Can you pipe into one? It would be cute if I could wget in machine 1 and send that result to offline machine 2 for processing.
binsquare: Haven't tried with bubblewrap, but it should work.

Yes! GPU passthrough is being actively worked on and will land in the next major release: https://github.com/smol-machines/smolvm/pull/96

Yeah, just tried piping, it works:

```
smolvm machine exec --name m1 -- wget -qO- https://example.com/data.csv \
  | smolvm machine exec --name m2 -i -- python3 process.py
```
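For anyone unfamiliar with the pattern: the pipe above is plain stdin/stdout plumbing, so the same producer/consumer shape works between any two processes. A minimal stand-in without smolvm, purely illustrative:

```shell
# Producer writes CSV to stdout; consumer counts the rows it receives on stdin.
# Same shape as the smolvm pipe above, with ordinary processes standing in.
printf 'a,b\n1,2\n3,4\n' \
  | python3 -c 'import sys; print(sum(1 for _ in sys.stdin))'
# prints 3
```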
gavinray: The feature that lets you create self-contained binaries seems like a potentially simpler way to package JVM apps than GraalVM Native. Probably a lot of other neat use cases for this, too:

```
smolvm pack create --image python:3.12-alpine -o ./python312
./python312 run -- python3 --version  # Python 3.12.x; isolated, no pyenv/venv/conda needed
```
binsquare: would be interested to see how you do it, how can I connect with you - emotionally?
sahil-shubham: Nice work on Shuru, I remember looking at it when I was researching this space. You went with a Rust wrapper on Apple's Virtualization framework, right?

I have been working on something similar but on top of Firecracker; I called it bhatti (https://github.com/sahil-shubham/bhatti). I believe anyone with a spare Linux box should be able to carve it into isolated programmable machines, without having to worry about provisioning them or their lifecycle.

The documentation's still early, but I have been using it for orchestrating parallel work (with deploy previews), offloading browser automation for my agents, etc. An auction-bought Hetzner server is serving me quite well :)
harshdoesdev: bhatti's cli looks very ergonomic! great job! also, yes, shuru was (and still is) a wrapper over Virtualization.framework, but it now supports Linux too (wrapper over KVM lol)
lambdanodecore: Basically any open source project nowadays runs its software stack in containers, often requiring Docker Compose. Unfortunately, Smol machines do not support Docker inside the microVMs, and they also do not support nested VMs for things that use Vagrant. I think this is a big drawback.
binsquare: I can support docker - will ship a compatible kernel with the necessary flags in the next release.
PufPufPuf: Hey, this is super cool. I've been researching tech like this for my AI sandboxing solution and ended up with Lima+Incus: https://github.com/JanPokorny/locki

My problem with microVMs was that they usually won't run Docker/Kubernetes; I work on apps that consist of whole Kubernetes clusters and want the sandbox to contain all that. Does your solution support running k3s, for example?
bch: see also [0][1] for projects in a similar* vein, including a historical account.

*yes, FreeBSD is specifically developed against Firecracker, which is specifically avoided with "Smol machines", but interesting nonetheless

[0] https://github.com/NetBSDfr/smolBSD
[1] https://www.usenix.org/publications/loginonline/freebsd-fire...
binsquare: that was one of my inspirations, but I don't think they went far enough in innovation. The microVM space is still underserved.
bch: > that was one of my inspirations

Colin's FreeBSD work or Emile's NetBSD work?