noufalibrahim
nonrandomstring
Brings back lovely memories of Knoppix days. That's more or less how we used to mod CD based distros, mounted as looped-back filesystems.
Maybe another missing link in the continuum between simple chroot and full VMs is User Mode Linux (UML). Whatever happened to that? Didn't it get folded into the Linux kernel as a standard thing at some point? Why do we never hear much of it now?
flyinghamster
I haven't really kept up with it, but I didn't find it in a cursory check of 6.0.7. Linode used UML when they were first starting, before eventually moving to Xen and then KVM.
mcculley
It is still there: https://github.com/torvalds/linux/tree/master/arch/um
xmodem
There's now also gVisor
goodpoint
> I cooked up this idea of creating a dummy file (dd), formatting it into a file system, mounting it as a loopback device and deb-bootstrapping it.
This has been used for 2 decades. It's extremely common for embedded systems.
noufalibrahim
I have very little embedded experience. I think I got the general approach from a python script shipped with Ubuntu to create bootable thumb drives but I might be misremembering.
pjmlp
On HP-UX we would just use vaults, already in 1999.
hinkley
Half of the things we front page are retreads of minicomputer and supercomputer tech, some of it as old as the late 1980’s. I’m wondering what will happen and what sort of retrospectives will occur when the people involved spend enough time researching existing techniques and discover they’re covering old ground.
pjmlp
It is going to be the equivalent of rediscovering ancient Roman, Greek, Egyptian, Babylonian, Mayan, Aztec... technology.
ilyt
We did that for a web hosting client. It worked, but managing it wasn't exactly pleasant. Then again, there isn't much managing if, instead of upgrading the system, you rebuild it from scratch every time.
noufalibrahim
Yup. That's the reason why we did this.
yjftsjthsd-h
What was the advantage to creating a filesystem in a disk image rather than just extracting a root tarball somewhere and chrooting into it?
coldacid
The filesystem-in-a-file was to produce the contents of the root tarball that was extracted somewhere (in /opt per OP) and chrooted into.
noufalibrahim
Yes. It was also the first time I used makeself which I quite liked.
akadempythag
Fun deep dive. In a previous life I wrote web apps, and I liked to develop on Chromebooks to make sure that everything ran smoothly on low-end machines.
I would set a price point of $200-300, which usually meant I was lucky to get a Rockchip. And this was before it was popular to build for ARM architectures.
And yet, ChromeOS's chroot-based projects like Crouton always managed to deliver. Surrounded by aluminum MacBooks and carbon-fiber Dells, I would run Rails apps and X GUIs and source builds on a rubberized hunk of plastic that had been designed for use in schools.
I have always been surprised at how well that worked. If it was using the same fundamentals as Docker and Podman, I'm not surprised that the containerization movement has enjoyed such popularity.
j0hnyl
Likewise it's amazing what one can do with Termux on an Android device.
spiffytech
There was something really nice about developing on a Chromebook. It felt simplified in a very pleasant way.
I ultimately discontinued the practice because beefy Chromebooks never became commonplace, some points of development friction got tiresome, and I stopped preferring Chrome.
Yet I'm nostalgic for the experience.
nunez
Yes, containers are "just chroot", until you:
- Want to give them their own IP addresses or networks, or
- Put upper bounds on their resources, or
- Get tired of dealing with chroot and unshare and seccomp and probably other tools I'm forgetting, or
- Want to run an arm64 container on an x86 host with minimal configuration, or...
This is a really fun (and important) exercise for anyone working seriously with containers to undergo, but let's not trivialize how insanely easy Docker made creating containers.
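For anyone attempting that exercise, a minimal sketch of the by-hand route might look like the following. This assumes a root filesystem has already been unpacked at ./rootfs (e.g. via debootstrap), and it needs a kernel that allows unprivileged user namespaces:

```shell
# Hand-rolled "container": new mount, UTS, IPC, net, PID, and user
# namespaces, with the current user mapped to root inside, then a
# plain chroot into the prepared tree:
unshare --mount --uts --ipc --net --pid --fork --user --map-root-user \
    chroot ./rootfs /bin/sh

# Note everything still missing versus Docker: no cgroup resource
# limits, no seccomp filter, no veth/NAT networking (the new network
# namespace has only lo), and no image distribution at all.
```

Each of those missing pieces is roughly one more tool (cgcreate/systemd-run, ip link, a registry client), which is the point of the list above.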
laumars
Making containers was easy long before Docker came along:
- FreeBSD Jails
- Solaris Zones
- Proxmox (which was an abstraction over OpenVZ, back before LXC came along)
In fact because of all of the above, I was a latecomer to Docker and didn't understand the appeal.
What Docker changed was that it made containers "sexy", likely due to the git-like functionality. It took containers out of the sysadmin world and into the developer's world. But it certainly didn't make containers any easier in the process.
tiagod
It did make it easier, at least the barrier to entry. I remember reading about jails years ago, when I had a lot less sysadmin knowledge, and I couldn't wrap my head around it.
With Docker, many people still can't wrap their heads around how it works and will do stupid things if they need to run it in a serious environment, but they can still run a bunch of containers to easily get some hard-to-install software running on their local machine!
Sure, jails were easy in some ways, but boiling Docker's success down to sexiness, instead of usefulness, sounds a bit like yet another "Dropbox is just rsync". Docker wasn't solving the isolation issue (which had obviously been solved for years) but mostly the distribution issue.
BiteCode_dev
So you mean you could take your FreeBSD Jails configuration, upload it to a well-known public website like Docker Hub, then have someone on Windows or Mac transparently install the image and run it with a few command lines?
Because Docker containers are called containers for this reason. It comes from the shipping-container analogy.
clcaev
Many years before Linux containers, FreeBSD jails were easily packaged up via tar, deployed via scp, and started with a minimal script. There wasn't much hype; it just worked. It was an excellent software packaging and distribution tool.
There wasn't a hub that I recall, nor was there tooling to use VMs so they could run on Windows/Mac. However, the main challenge, being able to distribute an "image" without requiring VM overhead, was solved elegantly. It just wasn't Linux, so it didn't make news.
laumars
Running Docker on non-Linux platforms requires a Linux VM running in the background, so it's not as cross-platform as people make out. The other container technologies can be managed via code too, and that code can be shared to public sites. And you can run those containers in a VM as well, if you want.
What's more, with ZFS you could not only ship the container as code, but even the container as a binary snapshot, very much like Docker's push/pull but predating Docker. Even on Linux, for a long time before Docker, you could ship container snapshots as tarballs.
Also worth mentioning is that early versions of Proxmox had (and it likely still does) a friendly GUI for selecting which image you wanted to download, making even image selection user-friendly. This was more than 10 years ago, long before Docker's first release, and something Docker Desktop still doesn't have to this day.
> Because docker container are called container for this reason. That comes from the boat container analogy.
The term computer "container" predates Docker by a great many years. Containers were in widespread use on other UNIXes, with Linux being late to the game. It was one of the reasons I preferred to run FreeBSD or Solaris in production in the 00s despite Linux being my desktop OS of choice. Even when Linux finally caught up with containerisation, Docker was still a latecomer to Linux.
Furthermore, for a long time Docker wasn't even containers (still isn't strictly that now, but it at least offers more in the way of process segregation than the original implementations did). Though that was a limitation of the mainline Linux kernel, so I don't blame Docker for it. FreeBSD Jails and Solaris Zones, by contrast, offered much more separation, even at the network level.
If we are being picky about the term "container" (not something I normally like to do) then Docker is the least "container"-like of all containerisation technologies available. But honestly, I don't like to get hung up on jargon because it helps no-one. I only raise this because you credited the term to Docker.
---
Now to be clear, I don't hate Docker. It may have its flaws, but there are aspects of it I really like too, and so I do use it regularly on my Linux hosts these days despite my original reluctance to move away from Jails (Jails is still much nicer if you need to do anything complicated with networking, but Docker is "good enough" for most cases). However, what I really dislike is this rewriting of history where people seem to think Docker stood out as a better designed technology - either from a UX or engineering perspective.
I personally think what made Docker successful was being in the right place at the right time. Linux was already a popular platform, containers were beginning to become widely known outside sysadmin circles, but Linux (at that time) still sucked at containerisation. So Docker got enough hype early on to generate the snowball effect that saw it become dominant. But let's also not forget just how unstable it was; for a long time it was frequently plagued with regression bugs from one release to another, which caused a great many sysadmins to groan whenever a new release landed.
(sorry for the edits, the original post was a flow of thoughts without much consideration to readability. Hopefully I've tidied it up)
ajross
Jails and Zones are kernel mechanisms, they aren't easy any more than cgroups/namespaces are easy (to be fair, yes, they're easier than linux's tools by default, albeit less flexible). What docker changed was absolutely to make things "easy". A Dockerfile is really no more than a shell script, the docker command line is straightforward, the docker registry is filled with good stuff ready for use.
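To illustrate that point, a typical Dockerfile really is little more than a recorded shell session plus some metadata; the entrypoint script here is a hypothetical placeholder, not anything from the thread:

```dockerfile
FROM debian:stable-slim
# Each RUN line is just a shell command executed inside the image:
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    rm -rf /var/lib/apt/lists/*
# The rest is metadata about what to ship and what to start:
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
CMD ["/usr/local/bin/entrypoint.sh"]
```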
pjmlp
HP-UX vaults, introduced around 1999.
lobotron
Yes: "What Docker changed was that it made containers 'sexy', likely due to the git-like functionality."
Post cloud era and all those new markets to tap into. Jails etc: like a tool hammering nails. Why, when or how to build a city is something else entirely.
-but, probably git-like sexy functionality indeed. yeeha
nine_k
Frankly, you don't need much more for a proof of concept. Besides chroot, you basically need a namespace control utility, an iptables control utility, and a volume control utility.
The "Docker in 100 lines of bash" [1] does just that. It skips the trouble of managing volumes by demanding btrfs; it could instead use stuff like tar / dd and mount to a comparable effect.
This is, of course, about as fiddly, slow, and unreliable as, say, building an amplifier with discrete transistors, or building a clock with nixie tubes and 74xx chips, etc. The point is not in producing a sleek and reliable solution (though this is not ruled out), but in seeing and understanding how a thing works from inside.
gsaussy
Also, if you choose not to use the `iptables` command line utility and want to handle network isolation by directly chatting with the kernel over a netlink socket, then you basically have to cargo cult someone else's implementation.
AFAIK the clearest documentation has been reading the Docker networking code.
simplotek
> (...) but let's not trivialize how insanely easy Docked made creating containers become.
So much this. Docker doesn't get the respect it deserves. I'd also mention packaging/installing/deploying applications, which Docker made so trivial and trouble-free.
Docker at its core is a masterclass of UX.
nine_k
Its UX used to be... slightly suboptimal here and there. Nevertheless, it took the world by storm.
IMHO it's because Docker made the few core things absurdly easy, not just low-friction, but zero-friction. That was key. The rest was important but not critical, and was built out eventually.
xorcist
That list is a good set of examples of things Docker makes unnecessarily hard.
Anything other than what Docker provides is out of the question, and what Docker does provide is often undocumented and changes between versions. It will also clash with whatever limits and netfilter rules the system uses anyway, unless you are very careful.
This is actually one of the things that systemd did well. It's completely straightforward and the man pages are mostly correct. They should have just gone with that instead.
Timon3
Exactly, Docker is great - until you leave the beaten path. At a previous workplace I set up a VPN solution around WireGuard and Docker. We wanted to start off by integrating one physical box into our network, where containers would be spawned with a VPN client and a customized MAC address to sort them into individual VLANs, which would then be accessible through the VPN.
It took a number of tries to set up correctly, especially since documentation on these areas mostly consists of reading through various issues on the Docker issue tracker. Some necessary features weren't even supported in the current Docker-Compose file versions. And best of all: three clients worked in parallel without a problem, but any further clients were not visible in the network. No error logs or anything, just no network.
Of course this wasn't using everyday features, but it would have been nice to have a bit more of an introductory guide into the subsystem. This way it felt like I was fighting against Docker more than it was helping me.
nunez
Docker networking complicates a lot of stuff if you're using bridges and overlays, especially WireGuard, which wholly depends on UDP and is insanely unfriendly once NATs come into play. Using --net=host or configuring a macvlan network to give the containers real IPs usually helps.
Also doesn't help that linuxserver/wireguard's docs are basically like "Oh, yeah, if you use this for connecting to your VPN from outside of your house (which, like, 99.95% of people installing WireGuard are trying to do), routing might be an issue and is left as an exercise for the reader."
(Funny enough, Tailscale is to WireGuard what Docker is to containers. They recognized that wg is amazing but amazingly complicated to get going with, especially through NAT/CGNAT, so they drastically simplified that, added amazing UX on top of it, and are raking in that VC cash. Can't wait to read "hurr durr tailscale dumb, wg-quick amirite" in like five years after Tailscale is a multi-billion dollar networking juggernaut)
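The macvlan route mentioned above can be sketched roughly as follows; the subnet, gateway, and parent interface are assumptions about the host, not values from the thread:

```shell
# Create a macvlan network bridged onto the host NIC so containers
# get real addresses on the physical LAN:
docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 lan

# Attach a container with a LAN IP; it is then reachable from other
# hosts on the segment, though notably not from the host itself
# (a known macvlan limitation):
docker run --rm --net=lan --ip=192.168.1.50 alpine ip addr
```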
simplotek
> Some necessary features weren't even supported in the current Docker-Compose file versions.
Why docker-compose?
Have you tried Docker swarm mode?
tyingq
Bocker[1] does a reasonably good job of showing the value of Docker was mostly in Docker hub.
goodpoint
docker replaced small scripts with a bloated codebase and a big attack surface.
hinkley
Docker saved me from clever people who overlapped with the “it works on my machine” crowd.
Regenerating from scratch every time saved a bunch of tribal knowledge from staying locked up in the heads of untrustworthy individuals. You couldn’t “forget” changes you made last Friday that don’t compose with the documented parts of the process.
goodpoint
This is like attributing the invention of phones to steve jobs and electric cars to elon musk.
And I wouldn't be too surprised if somebody did it on HN.
herodoturtle
This article was fun to read. It gives a nice history and overview of chroot.
Irrespective of whether one agrees with the claim in the title, it’s an informative read.
If you’re scanning through these comments, and know very little about chroot, I’d recommend you check this out.
[Edit] The previous linkbaity title was (thankfully) changed by dang.
notpushkin
Surprised no one has mentioned Bocker yet – “Docker implemented in around 100 lines of bash”. [1, 2]
[1]: https://github.com/p8952/bocker/
[2]: https://news.ycombinator.com/item?id=33218094 (116 comments)
john-tells-all
People tend to think of Docker as big and complex and magic. Bocker lets us take a step back and not take Docker too seriously :)
Someone1234
Wouldn't it be more apt to say they're "chroot delivered via a package management system?"
My big gripe with containers is how insecure many of them are, and how ignored the problem is because it is "someone else's problem." But even I won't pretend that taking a powerful tool like chroot (and or cgroups, etc) and then packaging them for rapid deployment/rollbacks/etc isn't key to their popularity.
asicsp
klyrs
> I pronounce chroot as change root.
The real hot take
tambourine_man
Actually, it’s pronounced “charuto”, Portuguese for cigar.
kirubakaran
Originally from Tamil https://en.wikipedia.org/wiki/Cheroot
BiteCode_dev
I am chroot.
cratermoon
> for me, containers are just chrooted processes. Sure, they are more than that: Containers have a nice developer experience, an open-source foundation, and a whole ecosystem of cloud-native companies pushing them forward
"All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?"
zoobab
1979 is the best year!
chroot was written by one of the founders of Sun Microsystems, a cool guy:
https://en.wikipedia.org/wiki/Bill_Joy https://myhero.com/B_Joy_cvhs_cl_US_2016_ul
adamgordonbell
Wikipedia does say Bill Joy wrote chroot, but the Unix history in the article says it was dmr or Ken Thompson depending on which file you look at, and chroot is listed in the V7 manual. So I think it predates the BSD kernel.
jlokier
The BSD kernel is actually older than Unix V7, so it could plausibly have been added to BSD first.
keithnz
I quite enjoyed Liz Rice's video on containers... though I'm not quite sure which one I originally watched, and she seems to have done the talk quite a few times (and updated it). But here's one: https://www.youtube.com/watch?v=oSlheqvaRso
whalesalad
I like containers, but I absolutely love chroot and debootstrap. It’s killer for when you need isolation (like building software locally without ruining the host OS) but don’t need the pomp and circumstance of containers.
mikepurvis
I've long been a fan of basic debootstrap->chroot, though I will say in recent times systemd-nspawn and then buildah have definitely pretty much displaced chroot in my toolbox. They're equally pomp-free relative to Docker, but have a bunch of nice affordances in terms of a properly set up network including DNS and hosts, handling of filesystem permissions, and correctly presenting a read-only /proc.
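For anyone curious, that debootstrap-then-nspawn flow is only a couple of commands; the path and suite below are arbitrary examples:

```shell
# Build a minimal Debian tree:
sudo debootstrap --variant=minbase stable /var/lib/machines/deb

# Enter it with a private PID namespace, working DNS/hosts, and a
# read-only /proc - the affordances plain chroot doesn't give you:
sudo systemd-nspawn -D /var/lib/machines/deb
```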
quickthrower2
If you are trying to tilt me over to always use linux instead of windows for all things you are doing a damn fine job.
My big pain in any OS is messing it up by installing the wrong thing or the wrong way. But at the same time I want to play with different languages.
I mean why use nvm just for node let alone anything else, when I can do this, it feels way cleaner.
whalesalad
I’ve been meaning to grok nspawn - for my recent use cases the filesystem was the only level of isolation required but I’m gonna check it out for sure. Glad to hear it’s as good as I’ve heard it is.
Back before the birth of Docker, when I was consulting, a friend's startup wanted a way to install their multi-component application (web service, database, etc.) inside a corporate environment. They built it on a Debian setup and generally apt-getted (apt-got?) all their requirements, used supervisord to manage the backend process (written in Python), and had it running. To do this on-prem, they needed an installer.
I cooked up this idea of creating a dummy file (dd), formatting it into a file system, mounting it as a loopback device, and deb-bootstrapping it. Then I chrooted inside it, installed all the dependencies needed, added all the code and things we needed for the application, left the chroot, and used makeself to create a self-extracting zip file of the whole application along with an initial config dialogue. We wrapped up the process in a Python script so that people could create such "installers".
Once you ran the file, it would unzip itself into /opt/{something} and start up all the processes. Inside the chroot was Debian with a db and several other things running. Outside the chroot was Red Hat or whatever. It ran surprisingly well and was used for a couple of years for all their customers, even after Docker came out. I rejoined them as a contractor after they got a decent amount of funding, and the same thing was still in use.
I wrote about this as an answer to a Stack Overflow question back then https://stackoverflow.com/questions/5661385/packaging-and-sh...
It was a proto docker without proper process isolation. Just file system isolation using chroot. Definitely brought back memories. The real value docker added was standardization, a central container registry, tooling etc. It mainstreamed these kinds of arcane old school sysad tricks into a generally usable product.
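A rough sketch of that workflow (sizes, paths, and package names below are arbitrary stand-ins, not the original script):

```shell
# 1. Create a blank file and format it as a filesystem:
dd if=/dev/zero of=app.img bs=1M count=2048
mkfs.ext4 app.img

# 2. Loop-mount it and bootstrap a minimal Debian into it:
sudo mkdir -p /mnt/app
sudo mount -o loop app.img /mnt/app
sudo debootstrap stable /mnt/app

# 3. Install dependencies and drop the application code inside:
sudo chroot /mnt/app apt-get install -y supervisor
sudo cp -r ./app /mnt/app/opt/app

# 4. Wrap the populated tree into a self-extracting installer
#    (setup.sh is a hypothetical first-run config script):
makeself /mnt/app installer.run "App installer" ./setup.sh
sudo umount /mnt/app
```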