Key Distinctions
I’ve run across a lot of people who think of Linux containers as nothing more than kernel-less VMs. “Hey, look, you can install Ubuntu in a container!” While it is true that you can do that, stopping your thinking at that level will lead you into conceptual errors when you apply it to containers more broadly.
Whereas a virtual machine won’t boot without an operating system, a container might have as little as a single static binary inside, able to do nothing but start that one program when you start the container. When that program stops, the container stops. Out at that extreme, containers have…
- no systemd
- no local shell
- no package manager
- no local login daemon
- no remote login facility
- no `/dev`, `/proc`, or `/sys`
- no privileged utilities (e.g. `ping`)
- no proper users, only a well and truly nerfed “root” user[^1]
- no platform libraries, not even foundational basics like `glibc`
While such containers are uncommon, they aren’t exactly rare. For many services, they’re the ideal expression of the developer’s intent. The only reason you don’t encounter them more often is that unless you’re using tooling like Go, which supports this pattern directly, it takes more work to produce single-static-binary containers. The primary benefit of the practice is that the result is smaller and has fewer breakable pieces.
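To make the pattern concrete, here is a minimal sketch of such a build; the file names, image tags, and Go version are illustrative assumptions, not anything prescribed by this article:

```
# Containerfile — sketch of the single-static-binary pattern (two-stage build)

# Stage 1: compile a trivial Go program into a fully static binary.
FROM docker.io/library/golang:1.22 AS build
WORKDIR /src
COPY main.go .
# CGO_ENABLED=0 avoids linking against glibc, so the binary needs no
# platform libraries at all.
RUN CGO_ENABLED=0 go build -o /hello main.go

# Stage 2: ship only that one binary, nothing else.
FROM scratch
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]
```

Build it with something like `podman build -t hello -f Containerfile .` and the resulting image contains exactly one file, exhibiting every limitation on the list above.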
When you do run across this type of container, it is likely that a one-line change to the `Containerfile` will convert it to run atop a more full-featured Linux base. Find the line that says “`FROM scratch`…”[^2] and change it to “`FROM ubuntu:latest`” or similar, then rebuild it. Now you have your single static binary running atop an Ubuntu base. That should regain you a local shell at least, plus platform libraries and maybe even a package manager. Woot!
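Continuing the illustrative Containerfile sketched earlier, the change is confined to the final stage’s first line:

```
# ...build stage unchanged...

# Was: FROM scratch
FROM docker.io/library/ubuntu:latest
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]
```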
The thing is, the result still won’t have a GUI, and it likely won’t have an SSH daemon running, either. If you choose a “minimal” container base image, it will likely have at least some of the limitations on the list above. It may give you local shell access, but it’ll operate more like an old-school Unix box’s single-user mode than a modern Linux VM. Even at this remove from the ideal, we’re still finding mismatches with the blinkered kernel-less VM view of containers.
The trend going forward in the security-conscious sections of the container industry is to have more of those limitations, not fewer. For example, the Chainguard and Google “distroless” images tick nearly every box on the list above.
What Do We Get From These Differences?
There are good reasons for these limitations. Indeed, one may argue that these “missing” features collectively constitute a feature.
One way to put this is the pets vs circus animals analogy. Virtual machines tend to turn into “pets” under this worldview, whereas containers are most often treated as “circus animals,” a difference that falls directly out of each technology’s foundational design decisions. A VM is expensive to set up, and it’s entirely based on persistent storage, so it tends to be curated and cuddled and customized. A container is cheap to create, and since you have to go out of your way to persist data outside the container proper, you’re incentivized to script everything to make automated redeployment easy when it comes time to upgrade.
The practical upshot of this is that if you were expecting to treat your Podman containers as “pets,” installing each one, then configuring it live as you would with a VM, you’re fighting the model. With containers, the intended workflow is to spend a lot of time up-front working on the `Containerfile` to get it working reliably, then deploy it in an automated fashion. This may involve other scripting; I like to pair each `Containerfile` with a `Makefile` so that I can give a command like `make install` to build, package, and deploy the container to my target, wrapping the multiple steps needed to achieve that in dependency-checked goodness. If I need to make a change, I make it in these source files, then rebuild and redeploy the container.
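By way of illustration only, such a Makefile might look something like this; the image name and deployment host are placeholders assumed for the sketch, not details from this article:

```
# Makefile — hedged sketch of a build/package/deploy wrapper for one container.
# Recipe lines must be indented with tabs. IMAGE and TARGET are placeholders.
IMAGE  := my-service
TARGET := deploy-host

.PHONY: build install

# Rebuild the image from the Containerfile in this directory.
build:
	podman build -t $(IMAGE) -f Containerfile .

# Copy the image to the target host over SSH and (re)start it there.
install: build
	podman save $(IMAGE) | ssh $(TARGET) podman load
	ssh $(TARGET) podman run -d --replace --name $(IMAGE) $(IMAGE)
```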
This means I rarely need to back up containers: I can recreate them at will, so the only things that need backing up are any external data volumes.
While there are ways to treat VMs as “circus animals” (e.g. Vagrant), the result is generally still a fully-featured OS with one or more applications running atop it. It is likely to have none of the limitations listed above.
Application Containers vs System Containers
A common misunderstanding results when a seasoned IT person first encounters containers and follows a line of reasoning that containers are…
- …a type of OS-level virtualization
- …available for all of the popular Linux distros in all the common container registries
- …designed to benefit IT people by solving one of their key problems
This is all true, but this person may then conclude that virtualization + major Linux distro + automated IT management goodness = VMs.
Right?
No. Wrong.
Here’s a quick way to see the key distinction:
```
$ podman run --rm -it ubuntu cat /etc/os-release
```
The most likely result is that it will pull the `ubuntu:latest` base image, instantiate it, and then, in place of the image’s default command, run our `cat` command and stop, having done the sole thing we asked of it. Because we gave the `--rm` flag, the Ubuntu container then disappears from ready view, but even if we had not, it would stop and then need to be restarted if we wished to ask anything else of it.
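If it helps to see what “disappears from ready view” means, consider a session like this one, with a throwaway container name invented for the example; the status text is indicative only:

```
$ podman run --name once -it ubuntu cat /etc/os-release
...
$ podman ps -a --filter name=once --format '{{.Status}}'
Exited (0) 8 seconds ago
$ podman start -a once     # re-runs the same cat command in the same container
...
$ podman rm once
```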
Let’s extend this idea further. Consider this variant:
```
$ podman run --rm -it ubuntu rm -v /etc/os-release
removed '/etc/os-release'
$ podman run --rm -it ubuntu rm -v /etc/os-release
removed '/etc/os-release'
```
Why did the file come back after we removed it? Because under Podman, the `ubuntu:latest` image we pulled is immutable.[^3] The first container did remove the file as requested, but that change was lost when the `--rm` flag we passed cleaned up the container. When we instantiated a second container from the same source image, that put everything back to the way it was shipped from the OCI registry. If it were otherwise, the second `rm` command would give an error message complaining about “No such file or directory.”
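One way to watch that copy-on-write layer in action is to skip `--rm` and ask Podman what changed relative to the image; this is a sketch with an invented container name, and the exact `podman diff` output may vary:

```
$ podman run --name scratchpad -it ubuntu rm -v /etc/os-release
removed '/etc/os-release'
$ podman diff scratchpad
D /etc/os-release
$ podman run --rm -it ubuntu ls /etc/os-release
/etc/os-release
$ podman rm scratchpad
```

The deletion lives only in that one container’s writable layer; the image underneath, and every container subsequently created from it, is untouched.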
Why? Because — one more time now — containers are not VMs.
But what if they were more VM-like? Wouldn’t that be useful? Can we not split the difference somehow?
Yes, we can, and when we do, we call the result a “system container” in order to contrast it with the classical view above, retroactively named an “application container,” a term used only when making this very distinction. Frequently-encountered instantiations of the system container concept include:
These systems all blur the lines of distinction made above to varying degrees. At the same time, they vary in other details such as whether they require a Linux kernel in the OCI image or not.
Rule of thumb: application containers are measured in megs, system containers in gigs.
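If you want to check that rule against the images on your own machine, `podman images` can report sizes directly; the format string below uses standard Go-template field names, and the output will of course depend on what you have pulled:

```
$ podman images --format '{{.Repository}}:{{.Tag}}  {{.Size}}'
```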
“I installed the Ubuntu container, and it isn’t accepting an SSH connection!”
That’s because…
```
$ podman run --rm -it ubuntu
root@bcb8281227c5:/# dpkg-query -W coreutils
coreutils 9.4-3ubuntu6
root@bcb8281227c5:/# dpkg-query -W systemd
dpkg-query: no packages found matching systemd
root@bcb8281227c5:/# dpkg-query -W openssh-server
dpkg-query: no packages found matching openssh-server
```
This tells us the image has a basic Linux userland installed but no SSH server, nor even a systemd setup with which to start it and keep it running in the background. This image is clearly designed to be the base of an application container; it is not a system container.
But, Ubuntu!
If you say the word “Ubuntu” to an IT person, it conjures certain expectations, ones which images intended to serve as the base of application containers do not live up to, on purpose.
Consider the official Docker Ubuntu container: if you click the Tags tab on its Docker Hub page, you’ll find that the default ARM64 image is only 29 MiB. How can this be when the Ubuntu Server for ARM ISO is 1.3 GiB? Even the minimal cloud images are around 20× that size. What’s the deal?
The deal is, Docker Inc.’s official Ubuntu container has nothing more than the base runtime plus enough of the package management system to bootstrap some other application. It isn’t meant to be run as an OS itself.
You can see this by studying its `Dockerfile`: all it does is unpack a base filesystem and run `bash`.[^5] If you create a container from this image on Docker Desktop and run it attached to your terminal, it drops you into this shell. If you want this base container to do something useful, you have to install a program atop it and then tell the container runtime to launch that instead of the default shell.
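A minimal sketch of that pattern, with an arbitrary package (nginx, here) standing in for whatever your service actually is:

```
# Containerfile — hedged example of an application container built atop
# the official Ubuntu base image.
FROM docker.io/library/ubuntu:latest
RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx \
 && rm -rf /var/lib/apt/lists/*
# Replace the default bash shell with the program this container exists to run.
CMD ["nginx", "-g", "daemon off;"]
```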
You may well ask: what is it that makes `docker.io/library/ubuntu:latest` an “Ubuntu,” then?

The single most salient aspect of this particular image’s “Ubuntuness” is the fact that it has `apt` installed. Secondly, the relatively small number of programs in its `/usr/bin` were built on and for a member of the Ubuntu family of distributions, against the libraries Canonical was shipping at the time the image was built.
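Both claims are easy to verify from the image itself; these commands are real, though the output is left to your own terminal since it varies by image build:

```
$ podman run --rm -it ubuntu apt --version
$ podman run --rm -it ubuntu sh -c 'ls /usr/bin | wc -l'
```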
Second Opinions
You might want to consider the alternate perspectives of:
…you know, people who you would expect to know what they’re talking about!
Every one of those articles was found with a single web search, which may turn up more good info for you.
License
This work is © 2022-2025 by Warren Young and is licensed under CC BY-NC-SA 4.0
[^1]: See the `default_capabilities` set here. It may be overridden on your system in one of the `containers.conf` files or with the `--cap-add/del` flags on a per-container basis. You will have to `podman inspect` a given container to find out which capabilities have been applied to it.

[^2]: If there’s more than one, we want the last instance.
[^3]: There are weaker container runners that do not have this property, typically in order to make them exceptionally lightweight by discarding complicated features. Podman, in contrast, relies on having a copy-on-write layer between the image proper and the instantiated container so that the modified copy never overwrites the original.
[^4]: In contrast to the follow-on `bootc` project, OSTree isn’t strictly OCI-based, though it often is in practice, as in the Fedora Atomic family of Linux distros.
The same is true of the official Alpine image’s
Dockerfile
, except that it runs the Busybox “ash
” implementation.