Containers Are Not VMs

Key Distinctions

I've run across a lot of people who think of Linux containers as nothing more than kernel-less VMs. "Hey, look, you can install Ubuntu in a container!" While it is true that you can do that, stopping your thinking at that level will lead you into conceptual errors when you apply it to containers more broadly.

Whereas a virtual machine won't boot without an operating system, a container might have as little as a single static binary inside, able to do nothing but start that one program when you start the container. When that program stops, the container stops. Out at that extreme, containers have:

  * no GUI
  * no SSH daemon
  * no local shell
  * no package manager
  * no platform libraries beyond whatever that one program requires
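
To make that concrete, here is a minimal sketch of such a container's Dockerfile. The program name "myapp" is an assumption for illustration; the only real requirement is that the binary be statically linked, since a scratch image contains no libraries at all:

    # Hypothetical bare-minimum image: one static binary, nothing else
    FROM scratch
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]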

While such containers aren't common, they aren't exactly rare, either. For many services, they're the ideal expression of the developer's intent. The only reason you don't encounter such containers more often is simply that it takes more work to produce them. The benefit of doing all that hard work is that the result is smaller and has fewer breakable pieces.

When you do run across this type of container, it is likely that a one-line change to the Dockerfile will convert it to run atop a more full-featured Linux base. Find the line that says "FROM scratch…"1 and change it to "FROM ubuntu:latest" or similar, then rebuild it. Now you have your single static binary running atop an Ubuntu base. That should regain you a local shell at least, plus platform libraries and maybe even a package manager. Woot!
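
As a sketch of that change, again assuming a hypothetical static binary named "myapp":

    # Before:
    #   FROM scratch
    # After: the same binary, now atop a full Ubuntu base
    FROM ubuntu:latest
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]

Rebuild with "docker build" and the image gains Ubuntu's filesystem, shell, and libraries underneath your program.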

The thing is, the result still won't have a GUI, and it likely won't have an SSH daemon running, either. If you choose a "minimal" container base image, it will likely have at least some of the limitations on the list above. It may give you local shell access, but it'll operate more like an old-school Unix box's single-user mode than a modern Linux VM. Even at this remove from the ideal, we're still finding mismatches with the blinkered kernel-less VM view of containers.

The trend going forward in the security-conscious sections of the container industry is toward more of those limitations, not fewer. For example, the Chainguard and Google "distroless" images tick nearly every box on the list above.

What of RouterOS, Then?

All of this is true of containers in general, but our focus here is on how that interacts with RouterOS's Containers feature.

CHR aside, that feature only works on MikroTik ARM hardware, most of which is rather low-spec. Only the most pared-back containers are usable on MikroTik's ARM-based switches due to internal storage limitations, since none offer external storage options as of this writing. The story is better with MikroTik's ARM-based routers, since a good many do have USB or NVMe expansion options, and across the board they have better CPUs and more internal storage, both flash and RAM.

Yet even so, a biggish home router like the RB5009 is only about as powerful as a Raspberry Pi 4, and it has plenty of other stuff to do in the background besides running your containers. There are bigger ARM routers in the MikroTik line such as the CCR2116, but if you can justify running an expensive CCR, it's a shame to soak up its considerable specialized power merely to run flabby containers. Efficiency continues to matter even at the high end.

These pressures encourage use of classic single-purpose containers rather than those that try to mimic full-OS VMs.

What Do We Get From These Differences?

There are good reasons for these limitations, and while most of them have nothing to do with RouterOS in particular, these inherent limitations do make containers a good feature for RouterOS by making them compatible with its design philosophy.

One way to put this is the pets vs circus animals analogy. Virtual machines tend to turn into "pets" under this worldview, whereas containers are most often treated as "circus animals." This comes out of the design decisions backing each technology. A VM is expensive to set up, and it's entirely based on persistent storage, so it tends to be curated and cuddled and customized. A container is cheap to create, and since you have to go out of your way to persist data outside the container proper, you're incentivized to script everything to make automated redeployment easy when it comes time to upgrade.

The practical upshot of this is that if you were expecting to treat your RouterOS container as a "pet," installing it and then configuring it live on the device, you're fighting the model. With containers, the intended workflow is to spend a lot of time up-front getting the Dockerfile working reliably, then deploy it in an automated fashion. This may involve other scripting; I often pair a Dockerfile with a Makefile so I can give a command like "make install" to build, package, and deploy the container to my target, wrapping the multiple steps needed to achieve that in dependency-checked goodness. If I need to make a change, I do it to these source files, then rebuild and redeploy the container. A sketch of such a Makefile follows.
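
This is a minimal sketch under assumptions of my own: an image named "myapp", a router at 192.168.88.1, and upload via scp. None of these names come from MikroTik's documentation:

    # Hypothetical Makefile: cross-build for ARM64, package, upload to router
    # (recipe lines must be indented with tabs)
    IMAGE  = myapp
    ROUTER = 192.168.88.1

    $(IMAGE).tar: Dockerfile
    	docker buildx build --platform linux/arm64 -t $(IMAGE) --load .
    	docker save $(IMAGE) -o $(IMAGE).tar

    install: $(IMAGE).tar
    	scp $(IMAGE).tar admin@$(ROUTER):

Once uploaded, the tarball can be attached to a new container via RouterOS's /container facility.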

This means I rarely need to back up containers: I can recreate them at will, so the only things that need backing up are any external data volumes.
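
On RouterOS, those external volumes are the container's mounts. A hedged sketch, with the names and source path assumed for illustration:

    # Hypothetical RouterOS commands: keep mutable data outside the container
    /container/mounts/add name=myapp-data src=/usb1-part1/myapp dst=/data
    /container/add file=myapp.tar interface=veth1 mounts=myapp-data

Back up the src directory, and the container itself remains disposable.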

While there are ways to treat VMs as "circus animals" (e.g. Vagrant), the result is generally still a fully-featured OS with one or more applications running atop it. It is likely to have none of the limitations listed above.

"I installed the Ubuntu container, and it isn't accepting an SSH connection!"

Complaints owing to this type of misapprehension are often followed by some combination of accusations that RouterOS's container infrastructure is broken, buggy, incomplete, or poorly documented.

If you find yourself slipping down this slope of illogic, it betrays a fundamental misunderstanding of what containers are. It's especially pernicious in the RouterOS world because of the limited nature of the devices that run RouterOS.

I believe the core problem here is an incorrect conflation of three true concepts. Containers are:

  * a form of virtualization
  * commonly based on a major Linux distro such as Ubuntu
  * a pillar of automated IT management

Virtualization + major Linux distro + automated IT management goodness = VMs, right? No.

If you install a containerized version of your favorite Linux distro on RouterOS per the thin instructions in its manual, you're likely to find that it will load, start, do a whole lot of NOTHING, then stop. Why?

It is because — one more time now — containers are not VMs.

But, Ubuntu!

If you say the word "Ubuntu" to an IT person, it conjures certain expectations, ones which containers do not live up to, on purpose.

Consider the official Docker Ubuntu container: if you click the Tags tab on that page, you'll find that the default ARM64 image is only 29 MiB. How can this be when the Ubuntu Server for ARM ISO is 1.3 GiB? Even the minimal cloud images are around 20× that size. What's the deal?

The deal is, Docker Inc.'s official Ubuntu container has nothing more than the base runtime and enough of the package management system to bootstrap some other application. It isn't meant to be run as an OS itself.

You can see this by studying its Dockerfile: all it does is unpack a base filesystem and run bash.2 If you create a container from this image on Docker Desktop and run it attached to your terminal, it drops you into this shell. If instead you run it atop RouterOS, it'll be detached from any terminal, so it appears to do nothing. If you want this base container to do something useful, you have to install a program atop it and then tell the container runtime — whether Docker Engine, RouterOS's /container feature, or something else entirely — what command to give to start it running.
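
A hedged sketch of that, using nginx purely as a stand-in for "some program you install atop it":

    # Hypothetical example: give the bare Ubuntu base a job to do
    FROM ubuntu:latest
    RUN apt-get update && \
        apt-get install -y --no-install-recommends nginx && \
        rm -rf /var/lib/apt/lists/*
    CMD ["nginx", "-g", "daemon off;"]

With a CMD of its own, the container now does something useful whether or not it's attached to a terminal.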

You can find canned alternative containers set up as pet-style VMs on the various container registries. You might even have a big enough RouterOS box that it'll run such an inherently piggy container. (Rule of thumb: containers are measured in megs, VMs in gigs.) Realize, however, that you're going against the nature of containers and RouterOS hardware by taking that path.

General Advice

If you're doing anything tricky at all, you're likely to need Docker Desktop as a build environment, or at least something like it (Rancher Desktop, Podman Desktop, etc.). If you're going to build the container image on your full-powered desktop OS anyway, why not do at least one test deployment locally? A good many of the problems I see people run into with containers on RouterOS would've occurred under Docker Desktop as well. If your idea doesn't work on your development machine, it certainly isn't going to work under CHR or on RouterOS hardware.

Say you wish to base your container on Docker's official Ubuntu image. My advice is to start your work on Ubuntu Desktop, whether native on your development system or in a VM. Install Docker Engine inside that fully-featured OS, and develop the ideas leading to your finished container there. Iterate your way toward a solid Dockerfile, then test it on the build host. Once it's working reliably, script the commands to generate, package, and deploy the container atop the host's container runtime. Only once all of that is working should you bother with porting that to your RouterOS platform of choice. You'll save yourself a lot of pain this way, shaking out all the basic problems atop a full-powered container development platform first.
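
Sketched as a shell loop on the build host, with the image name assumed:

    # Hypothetical local iteration loop, run on the build host
    docker build -t myapp .       # rebuild after each Dockerfile change
    docker run --rm -it myapp     # verify behavior before involving RouterOS

Only when this loop turns boring should the cross-build and deployment steps enter the picture.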

Although this results in double-virtualization in the VM case, it'll still run faster than on RouterOS hardware. It may even run faster than on CHR due to the greater maturity of the platform.

CHR Complications

Another guise in which I've seen this misconception appear involves MikroTik's Cloud Hosted Router (CHR) distribution of RouterOS. In brief, I strongly recommend that you do not run containers atop CHR. As to the why of it, there are two separate sub-cases:

1. Bare-Metal Hypervisor

If you run CHR as it is meant to be deployed in production, atop a bare-metal hypervisor such as ESXi, then in my amazingly humble opinion 😛 there is no good reason at all to run containers atop it. It amounts to double virtualization, so it can only slow your application down. Run your other applications alongside CHR as separate VMs out on the hypervisor; don't run them atop CHR as containers.

If you have an application that only comes as a container — or at least one that's best deployed as a container — then I still recommend that you install a bare-bones container host OS such as Flatcar atop the hypervisor, then run the container out there rather than under CHR. Unlike RouterOS's container feature, a proper container host OS will not be bleeding-edge, thinly-featured, and poorly-documented.

2. Hosted Hypervisor

If you're running CHR on a desktop OS under a so-called type-2 hypervisor,3 there is only one good reason I can think of to deploy containers atop CHR: you're testing ideas that you intend to eventually deploy atop RouterOS-based hardware.

I believe this case devolves to the general-purpose advice above: install a full-powered container development environment either on the host or under a test VM compatible with your container's base image, and do your development work there. Only once you have the container working should you try to port it to CHR, and then merely to check that it works before deploying to actual hardware.

Second Opinions

You might want to consider the alternate perspectives of:

…you know, people who you would expect to know what they're talking about!

Every one of those articles was found with a single web search, which may turn up more good info for you.

License

This work is © 2022-2024 by Warren Young and is licensed under CC BY-NC-SA 4.0


  1. ^ If there's more than one, we want the last instance.
  2. ^ The same is true of the official Alpine image's Dockerfile, except that it runs the Busybox "ash" implementation.
  3. ^ KVM, VirtualBox, Hyper-V, VMware Workstation, Parallels…