Containers Are Not VMs
Key Distinctions
A classic virtual machine — as used in IT shops since that tech became the new hotness following the launch of VMware — starts with installing a fully-featured host OS, then installing one or more applications atop that. Even when you start with a stripped-down host OS, the resulting VM is likely to have all the features of an old-school Unix server at the least, and likely much more besides.
With a container, it's possible that it has…
- no systemd
- no local shell
- no package manager
- no remote login facility
- no privileged utilities, such as `ping`
- no root user, or at least one well and truly nerfed
- no `procps`, or a stripped-down minimalist alternative
Indeed, it's possible to pare a container down to a single executable that runs atop the host kernel, offering no other services.
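As a minimal sketch of what that extreme looks like, assuming a statically linked program called `myservice` (a name invented for this example), the entire `Dockerfile` can be:

```dockerfile
# Start from an empty image: no shell, no package manager, no init.
FROM scratch

# Copy in one statically linked binary built outside the container.
# "myservice" is a hypothetical name; substitute your own program.
COPY myservice /myservice

# The resulting container can do exactly one thing: run that program.
ENTRYPOINT ["/myservice"]
```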
Even when you deploy a single-purpose service atop an Ubuntu or Alpine base container, it won't be running SSH or a GUI. It won't even be running a multi-user pseudo-tty login daemon. If you can shell in at all, it'll operate more like single-user mode in the classical Unix sense than anything else.
CHR aside, running containers on RouterOS implies use of MikroTik ARM hardware, most of which is rather low-spec. Containers are nearly a joke on most of MikroTik's ARM-based switches, since none of them offer external storage options, as of this writing. For the most part, containers are only sensible on the higher-end ARM-based MikroTik routers due to assorted limitations best left covered elsewhere. To get a container running at all, you'll be motivated to select the most pared-back sort of container.
Even a biggish home router like the RB5009 is only about as powerful as a Raspberry Pi 4, and it has plenty of other stuff to do in the background besides running your containers. There are bigger ARM routers in the MikroTik line such as the CCR2116, but if you can justify a CCR, you're unlikely to be able to justify soaking up its considerable specialized power by running flabby containers on it.
These pressures encourage use of classic single-purpose containers rather than those that try to mimic full-OS VMs.
So Why Is a Container Not a VM, Then?
There are good reasons for these limitations, and while most of them have nothing to do with RouterOS in particular, these inherent limitations do make containers a good feature for RouterOS by making them compatible with its design philosophy.
One way this is put is the cattle vs pets analogy. Virtual machines tend to turn into "pets" under this worldview, whereas containers are most often treated as "cattle." This comes out of the design decisions backing each technology. A VM is expensive to set up, it's entirely based on persistent storage, and so it tends to be curated and cuddled and customized. A container is cheap to create, nothing is persistent at the base image layer, you have to go out of your way to persist data outside the container proper, and you're incentivized to script everything, making automated redeployment possible.
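That "going out of your way" amounts to declaring the persistent pieces explicitly. A generic Docker-side sketch, with made-up names:

```shell
# Create a named volume to hold the data that must outlive the container.
docker volume create myservice-data

# Run the container with that volume mounted; everything else inside the
# container stays disposable and can be recreated from the image.
docker run --rm -v myservice-data:/var/lib/myservice myservice
```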
The practical upshot of all this is that if you were expecting to treat your RouterOS container as a "pet," installing it and then configuring it live, on the device, you're fighting the model. With containers, the intended workflow is to spend a lot of time up-front working on the `Dockerfile` to get it working reliably, then deploy it in an automated fashion. This may involve other scripting; I often pair a `Dockerfile` with a `Makefile` so I can give a command like `make install` to build, package, and deploy the container to my target, wrapping the multiple steps needed to achieve that in dependency-checked goodness. If I need to make a change, I do it to these source files, then rebuild and redeploy the container.
This means I rarely need to back up containers: I can recreate them at will, so the only thing that needs to be backed up are any external data volumes.
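A rough sketch of that workflow follows, with a hypothetical image name and router address. The transfer step assumes the router's SSH service is enabled, since RouterOS can accept file uploads over SCP, and the recipe lines must be indented with tabs as `make` requires:

```makefile
# Hypothetical example: cross-build an ARM64 image, export it as a
# tarball, and copy it to the router for the /container feature to import.
IMAGE  = myservice
ROUTER = 192.168.88.1

build:
	docker buildx build --platform linux/arm64 --load -t $(IMAGE) .

package: build
	docker save $(IMAGE) -o $(IMAGE).tar

install: package
	scp $(IMAGE).tar admin@$(ROUTER):$(IMAGE).tar
```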
While there are ways to treat VMs as "cattle" (e.g. Vagrant), the result is generally still a fully-featured OS with one or more applications running atop it. It is likely to have none of the limitations listed above.
"I installed the Ubuntu container, and it isn't accepting an SSH connection!"
This is the most common formulation of the problem. It's often followed by some combination of accusations that RouterOS's container infrastructure is:
- bleeding-edge (true)
- thinly-featured (also true)
- poorly-documented (quite true)
- and therefore entirely broken (not true)
If you find yourself slipping down this slope of illogic, it betrays a fundamental misunderstanding of what containers are. It's especially pernicious in the RouterOS world because of the limited nature of the devices that run RouterOS.
I believe the core problem here is an incorrect conflation of three true concepts. Containers are:
- …a type of OS-level virtualization
- …available for all of the popular Linux distros in all the common container registries
- …designed for use by the sorts of people likely to buy RouterOS devices for non-trivial use cases
Virtualization + major Linux distro + automated IT management goodness = VMs, right? No.
If you install a containerized version of your favorite Linux distro on RouterOS per the thin instructions in its manual, you're likely to find that it will load, start, do a whole lot of NOTHING, then stop. Why?
It is because — one more time now — containers are not VMs.
But, Ubuntu!
If you say the word "Ubuntu" to an IT person, it conjures certain expectations, ones which containers do not live up to, on purpose.
Consider the official Docker Ubuntu container: if you click the Tags tab on that page, you'll find that the default ARM64 image is only 29 MiB. How can this be when the Ubuntu Server for ARM ISO is 1.3 GiB? Even the minimal cloud images are around 20× that size. What's the deal?
The deal is, Docker Inc.'s official Ubuntu container has nothing more than the base runtime plus enough of the package manager to bootstrap some other application atop it. It isn't meant to be run as an OS itself.
You can see this by studying its `Dockerfile`: all it does is unpack a base filesystem and run `bash`. If you create a container from this image on Docker Desktop and run it attached to your terminal, it drops you into this shell. If instead you run it atop RouterOS, it'll be detached from any terminal, so it appears to do nothing. If you want this base container to do something useful, you have to install a program atop it and then tell the container runtime — whether Docker Engine, RouterOS's `/container` feature, or something else entirely — what command to give to start it running.
(The same is true of the official Alpine image's `Dockerfile`.)
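Contrast that with the general shape of a derived container that does do something: the base image, one installed program, and an explicit start command. The choice of nginx here is purely illustrative:

```dockerfile
# Illustrative only: the Ubuntu base plus one program and a start command.
FROM ubuntu:22.04

# Use the package manager the base image ships with to install a service.
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*

# Without this (or a start command given on the RouterOS side), the
# container would run the default bash with no terminal attached and exit.
CMD ["nginx", "-g", "daemon off;"]
```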
You can find canned alternative containers set up as pet-style VMs on the various container registries. You might even have a big enough RouterOS box that it'll run such an inherently piggy container. (Rule of thumb: containers are measured in megs, VMs in gigs.) Realize, however, that you're going against the nature of containers and RouterOS hardware by taking that path.
General Advice
If you're doing anything tricky at all, you're likely to need Docker Desktop as a build environment, or at least something like it. (Rancher Desktop, Podman Desktop, etc.) If you're going to build the container image on your full-powered desktop OS anyway, why not do at least one test deployment locally? A good many of the problems I see people run into with containers on RouterOS would've occurred under Docker Desktop as well. If your idea doesn't work under Docker Desktop, it certainly isn't going to work under CHR or on RouterOS hardware.
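A local shakedown need not be elaborate. Assuming a hypothetical image name, something like this is enough to prove the container starts and does its job before RouterOS enters the picture:

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t myservice .

# Run it detached, which is how RouterOS will run it, then check the logs
# to see what it actually did.
docker run --detach --name myservice-test myservice
docker logs myservice-test
```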
Say you wish to base your container on Docker's official Ubuntu image. My advice is to start your work on Ubuntu Desktop, whether native on your development system or in a VM. Install Docker Engine inside that fully-featured OS, and develop the ideas leading to your finished container there. Iterate your way toward a solid `Dockerfile`, then test it on the build host. Once it's working reliably, script the commands to generate, package, and deploy the container atop the host's container runtime. Only once all of that is working should you bother with porting that to your RouterOS platform of choice. You'll save yourself a lot of pain this way, shaking out all the basic problems atop a full-powered container development platform first.
Although this results in double-virtualization in the VM case, it'll still run faster than on RouterOS hardware. It may even run faster than on CHR due to the greater maturity of the platform.
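When you do reach the porting step, the RouterOS side of the deployment looks roughly like the following. The parameter names come from the RouterOS container documentation as of this writing and may differ in your version; the interface addresses, storage path, and file name are placeholders:

```shell
# On the router, after uploading the image tarball and enabling the
# container package: give the container a virtual Ethernet interface,
# register the image, and start it.
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/container/add file=myservice.tar interface=veth1 root-dir=usb1/myservice logging=yes
/container/start 0
```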
CHR Complications
I've also seen this misconception come up in connection with MikroTik's Cloud Hosted Router (CHR) distribution of RouterOS. In brief, I strongly recommend that you do not run containers atop CHR. As to the why of it, there are two separate sub-cases:
1. Bare-Metal Hypervisor
If you run CHR as it is meant to be deployed in production, atop a bare-metal hypervisor such as ESXi, then in my amazingly humble opinion 😛 there is no good reason at all to run containers atop it. It amounts to double virtualization, so it can only slow your application down. Run your other applications alongside CHR as separate VMs out on the hypervisor; don't run them atop CHR as containers.
If you have an application that only comes as a container — or at least one that's best deployed as a container — then I still recommend that you install a bare-bones container host OS such as Flatcar atop the hypervisor, then run the container out there rather than under CHR. Unlike RouterOS's container feature, a proper container host OS will not be bleeding-edge, thinly-featured, and poorly-documented.
2. Hosted Hypervisor
If you're running CHR on a desktop OS — such as for trying experiments before deploying them to production — you'll be running it under a desktop hypervisor: KVM, VirtualBox, Hyper-V, VMware Workstation, Parallels… In that case, there is only one good reason I can think of to deploy containers atop CHR: you're testing ideas that you intend to eventually deploy atop RouterOS-based hardware.
I believe this case devolves to the general-purpose advice above: install a full-powered container development environment either on the host or under a test VM compatible with your container's base image, and do your development work there. Only once you have the container working should you try to port it to CHR, and then merely to check that it works before deploying to actual hardware.
Second Opinions
You might want to consider the alternate perspectives of:
…you know, people who you would expect to know what they're talking about!
Every one of those articles was found with a single web search, which may turn up more good info for you.