Well, before Docker I used to work on Xen, and that possible future of massive block devices assembled using Vagrant and Packer has thankfully been avoided...
One thing that's hard to capture in the article -- but that permeated the early DockerCons -- is the (positive) disruption Docker had on how IT shops were run. Before that, going to production was a giant effort, and being able to 'ship your filesystem' quickly was such a change in how people approached their work. We had so many people come up to us, grateful that they could suddenly build services more quickly and get them into the hands of users without having to seek permission slips signed in triplicate.
We're seeing another seismic cultural shift now with coding agents, but I think Docker had a similar impact back then, and it was a really fun community spirit. Less so today with the giant hyperscalers all dominating, sadly, but I'll keep my fond memories :-)
Funny comment considering lightweight/micro-VMs built with tools like Packer are what some in the industry are moving towards.
Some of those talks strangely make more sense today (e.g. Rump Kernels or unikernels + coding agents seems like a really good combination, as the agent could search all the way through the kernel layers as well).
"Ship your machine to production" isn't so bad when you have a ten-line script to recreate the machine at the push of a button.
Wonder when some enterprising OSS dev will rebrand dynamic linking in the future...
I don't care about glibc or compatibility with /etc/nsswitch.conf.
Look at the hack Rust does because it uses libc:
> pub unsafe fn set_var<K: AsRef<OsStr>, V: AsRef<OsStr>>(key: K, value: V)
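For anyone wondering why that signature is unsafe: here's a minimal sketch of what it forces on callers, assuming a Rust 2024 edition toolchain (where the unsafety is enforced). Mutating the environment goes through libc's setenv, which can race with getenv on other threads, so the caller has to promise nothing else is reading it:

    use std::env;

    fn main() {
        // SAFETY: no other threads have been spawned yet, so nothing can
        // be calling getenv while libc mutates the environment under us.
        unsafe {
            env::set_var("MY_FLAG", "1"); // MY_FLAG is a made-up name
        }
        assert_eq!(env::var("MY_FLAG").unwrap(), "1");
    }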
So what do you do when you need to resolve system users? I sure hope you don't parse /etc/passwd, since plenty of users (me included) use other user databases (e.g. sssd or systemd-userdbd).
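For reference, a sketch of the NSS-respecting way to do it from Rust, assuming the libc crate is available: getpwnam_r dispatches through glibc's NSS machinery, so whatever backend /etc/nsswitch.conf names (files, sssd, systemd-userdbd, LDAP) gets consulted. Error handling is minimal for brevity:

    use std::ffi::CString;
    use std::mem::MaybeUninit;
    use std::ptr;

    /// Resolve a username to a uid via NSS instead of parsing /etc/passwd.
    fn uid_of(name: &str) -> Option<libc::uid_t> {
        let c_name = CString::new(name).ok()?;
        let mut pwd = MaybeUninit::<libc::passwd>::uninit();
        let mut buf = vec![0 as libc::c_char; 16 * 1024]; // fits most entries
        let mut result: *mut libc::passwd = ptr::null_mut();
        let rc = unsafe {
            libc::getpwnam_r(
                c_name.as_ptr(),
                pwd.as_mut_ptr(),
                buf.as_mut_ptr(),
                buf.len(),
                &mut result,
            )
        };
        // rc == 0 with a null result means "no such user", not an error.
        if rc == 0 && !result.is_null() {
            Some(unsafe { (*result).pw_uid })
        } else {
            None
        }
    }

    fn main() {
        println!("{:?}", uid_of("root")); // Some(0) on virtually any system
    }

This is exactly what a static musl binary gives up: there's no dlopen-able NSS module path, so only the files backend works.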
I think it’s laziness, not difficulty. That’s not meant to be snide or glib: I think gaining expertise in how to package and deploy non-containerized applications isn’t difficult or unattainable for most engineers; rather, it’s tedious and specialized work to gain that expertise, and Docker allowed much of the field to skip doing it.
That’s not good or bad per se, but I do think it’s different from “pre-container deployment was hard”. Pre-container deployment was neglected and not widely recognized as a specialty that needed to be cultivated, so most shops sucked at it. That’s not the same as “hard”.
I sort of had the problem in mind; Docker is the answer. I wasn't clever enough to have invented it.
If I had, I would probably have invented Octopus Deploy, as I was a Microsoft/.NET guy.
Minus the kernel, of course. What is one to do for workloads requiring special kernel features or modules?
Good luck convincing people to switch!
Using it, solving problems with it, and building a real community around it tend to make a much greater impact in the long run.
Absolutely not. Nix and Guix are package managers that (very simplified) model the build process of software as pure functions: dependencies and source code go in as inputs, and a resulting build comes out as the output. Docker is something entirely different.
> they’re both still throwing in the towel on deploying directly on the underlying OS’s userland
The existence of an underlying OS userland _is_ the disaster. You can't build a robust package management system on a shaky foundation; if Nix or Guix were to use anything from the host OS, their packaging model would fundamentally break.
> unless you go all the way to nixOS
NixOS does not have a "traditional/standard/global" OS userland on which anything could be deployed (excluding /bin/sh for simplicity). A package installed with nix on NixOS is identical to the same package being installed on a non-NixOS system (modulo system architecture).
> shipping what amounts to a filesystem in a box
No. Docker ships a "filesystem in a box", i.e. an opaque blob, an image. Nix and Guix ship the package definitions from which they derive what they need to have populated in their respective stores, and either build those required packages or download pre-built ones from somewhere else, depending on configuration and availability.
With Docker, two independent images share nothing, except maybe some base layer, if they happen to use the same one. With Nix or Guix, packages automatically share their dependencies iff it is the same dependency. The thing is: if one package depends on lib foo compiled with -O2 and the other one depends on lib foo compiled with -O3, then those are two different dependencies. This nuance is something that only the Nix model started to capture at all.
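To make that concrete, here's a toy sketch of the idea in Rust (not real Nix; DefaultHasher stands in for the cryptographic hash Nix actually uses): a package's store path is a hash over all of its inputs, build flags included, so two packages share a dependency exactly when every input matches:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    #[derive(Hash)]
    struct Derivation<'a> {
        name: &'a str,
        src_hash: &'a str,          // hash of the source tarball
        build_flags: &'a [&'a str], // part of the package's identity!
        dep_paths: &'a [u64],       // store hashes of its dependencies
    }

    fn store_path(d: &Derivation) -> String {
        let mut h = DefaultHasher::new();
        d.hash(&mut h);
        format!("/nix/store/{:016x}-{}", h.finish(), d.name)
    }

    fn main() {
        let foo_o2 = Derivation { name: "foo", src_hash: "abc", build_flags: &["-O2"], dep_paths: &[] };
        let foo_o3 = Derivation { name: "foo", src_hash: "abc", build_flags: &["-O3"], dep_paths: &[] };
        // Same source, different flags: two different store paths, so a
        // package depending on each gets its own copy. Identical inputs
        // would hash to the same path and be shared automatically.
        assert_ne!(store_path(&foo_o2), store_path(&foo_o3));
    }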
That means unlike Gentoo, I've never dealt with a "slot conflict" where two packages want conflicting dependencies. And unlike Ubuntu, I have new versions of everything.
Elsewhere you pick two: share dependencies, be on the bleeding edge, or avoid wasting your time resolving conflicts.
If you have adopted a bad tool then people are likely to want the bad tool in more places. This is the opposite of a virtuous cycle and is a horrible form of tech debt.