Containers Are Not VMs

Foundations

Much of what I originally wrote here is now calved off into a companion article by the same name, aimed at a more general audience of Podman users. Podman is my preferred tool for creating and testing RouterOS containers, and nearly every point made in that other article applies to RouterOS.

With that basis, let us now consider container.npk, the RouterOS package providing a small subset of Podman’s broad feature suite.

MikroTik ARM Hardware

At a high level, it may be said that the container.npk package is built for only two platforms. We will get to the second of these — CHR — below, but for now, we wish to focus on the primary deployment target, MikroTik’s ARM-based hardware devices.

With but rare exceptions,¹ this hardware is all fairly low-spec. Indeed, MikroTik is famous for delivering a tremendous amount of networking grunt with very little in the way of hardware resources.

The problem this raises is that many containers are designed with high-end desktop or server-grade hardware in mind. The tech comes out of the corporate IT and cloud computing worlds, where computers are measured in terms of the tonnage limitations on forklifts and the BTU ratings on industrial air-handling units. In that world, the relevant question is not whether you have enough CPU and RAM to run a container; it's how many CPUs you wish to run a given container across, in parallel!

The RouterOS world differs materially. We may well run many MikroTik devices on a single LAN, but there is little point in trying to wrangle them into service as a Kubernetes cluster.

The area where we see this distinction most starkly is in MikroTik’s ARM-based switches, which can barely run containers at all. Because they’re designed to offload most of the processing to the switch chip, they include fairly weak CPUs, and atop that, they tend to have very little in the way of free storage, both flash and RAM. It would be an overreach to say they’re single-purpose devices — indeed, they are surprisingly powerful and flexible — but adding more tasks via containers is best done carefully, with these restrictions kept firmly in mind.

The story is better with MikroTik’s ARM-based routers, since a good many do have USB or NVMe expansion options, and across the board they have better CPUs and more internal storage, both flash and RAM.

Yet even so, a biggish home router like the RB5009 is only about as powerful as a Raspberry Pi 4, and it has plenty of other stuff to do in the background besides running your containers. There are bigger ARM routers in the MikroTik line such as the CCR2116, but if you can justify running an expensive CCR, it’s a shame to soak up its considerable specialized power merely to run flabby containers. Efficiency continues to matter even at the high end.

These pressures encourage use of application containers, as opposed to system containers. Under that restriction, containers mate well with RouterOS’s design philosophy.

Key Misconception Punctured

There are several accusations commonly made by users familiar with containers on other platforms but new to container.npk. They will say it is…

If you find yourself slipping down this slope of illogic, it typically betrays a fundamental misunderstanding that the companion article goes to some pains to sort out. Not only is it as true under RouterOS as under the big-boy engines that containers are not VMs; none of the "system container" schemes the other article lists is suitable for use on RouterOS hardware. You can blur the lines, and on certain devices like the RDS2216 that can make a type of sense, but ultimately if you're going down this path, moving your workloads to a proper container-focused OS is a better use of your time.

Several of the designed-in limitations of container.npk combine to produce unfortunate practical effects. Here’s a biggie:

> /container
/container> add remote-image=docker.io/library/ubuntu interface=veth1
/container> start ubuntu
/container> shell cmd="cat /etc/os-release" ubuntu 
container not running

If you then say print to find out why it gave that error, you find the “S” flag, meaning it’s stopped. Why? You just started it, right?

Yes indeed, but does that mean it’s broken? Nope. What it did was start the default ENTRYPOINT=/bin/sh detached from any terminal, which caused the shell to immediately exit, having done everything it is possible for it to do under that condition.

But why?

It is because — one more time now — containers are not VMs.

You may then say that this is a bad example, and indeed, it is possible to find canned alternative containers set up as pet-style system containers, not as we have here, a base image intended for constructing single-purpose application containers. Presuming you even have a big enough RouterOS box that it’ll run such an inherently piggy container — a good rule of thumb is that application containers are measured in megs, system containers in gigs — do realize that you’re fighting the container.npk design.
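That said, if all you wanted from that Ubuntu image was a quick interactive look around, you can get it without resorting to a system container: give the container a long-running command at creation time so there is a live process for shell to attach to. This is a minimal sketch only, assuming the entrypoint and cmd properties behave like their Docker namesakes, that veth1 already exists, and that 0 is the item number print reports for the new container:

/container> add remote-image=docker.io/library/ubuntu interface=veth1 entrypoint=/bin/sleep cmd=3600
/container> start 0
/container> shell 0

Once that hour-long sleep expires, or you stop the container yourself, it exits again, which is exactly the application-container lifecycle: the container lives only as long as the one job it was given.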

The advent of the RDS2216 gives us reason to believe this might change in the future, and I may revise this article later if the hardware trends I'm currently projecting land anywhere near where I think they will. Until then, we have to go with the evidence at hand. My advice is to resist the urge to plan on a mass migration of your flock of VMs onto a ROSE server packed full of SSDs.

General Advice for Building RouterOS Containers

One school of thought is to ignore everything above and install a system container onto RouterOS and then try to treat it as a kernel-less VM. Shell in, sudo apt install $A_BUNCH_OF_STUFF, then sudo systemctl enable $MY_NEW_SERVICE, and so forth.

I believe it is far better to limit yourself to the old-school container design space: build single-purpose application containers, each pared back as far as you can manage while still delivering the desired service. I will not insist that you go full-on microservices here, but it must be said that that design trope is highly compatible with the intentional design limitations of container.npk.

I find it easiest to develop a new container on top of a full OS that is as similar as possible to the one you’re using for the container’s base layer. I’m a particular fan of Alpine when it comes to RouterOS containers, but for other purposes it might be Debian or Red Hat UBI. This allows you to try things out interactively without reattempting a potentially costly build. Instead, apply each lesson learned to the Containerfile incrementally, and only begin build trials once you gain a measure of confidence that it should work.
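In practice, that interactive loop might look something like this on the development box; a sketch only, assuming Podman with an Alpine base, and with python3 standing in for whatever your service actually needs:

$ podman run -it --rm --name scratchpad docker.io/library/alpine sh
/ # apk add --no-cache python3
/ # python3 -m http.server 8080

Each command that proves out becomes a RUN or CMD line in the Containerfile, so that the eventual build is just a replay of steps you already know work.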

Then, having gotten the image to build, do at least one test deployment locally. A good many of the problems I see people run into with containers on RouterOS would’ve occurred under Podman Desktop as well. If your idea doesn’t work on your development machine, it certainly isn’t going to work under the highly specialized CHR environment or a constrained ARM device running RouterOS.

Next, script the commands to generate, package, and deploy that image atop the development host’s container runtime.
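Such a script need not be fancy. A sketch, assuming the image is called myapp and that an HTTP probe on a placeholder port makes a sufficient smoke test:

$ podman build -t myapp:dev .
$ podman run --rm -d --name myapp-test -p 8080:8080 myapp:dev
$ curl -fsS http://localhost:8080/
$ podman rm -f myapp-test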

Only once all of that is working should you bother with porting that to your RouterOS platform of choice. You’ll save yourself a lot of pain this way, shaking out all the basic problems atop a full-powered container development platform first.

Although using a VM as your development host results in double virtualization, it'll still run faster than on RouterOS hardware. It may even run faster than on CHR due to the greater maturity of the platform.
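When you do reach that porting step, the same scripted approach carries over. A hedged sketch, assuming an ARM64 target device, a development box with QEMU binfmt support for the cross-architecture build, and placeholder names for the router address, veth interface, and storage path:

$ podman build --arch arm64 -t myapp:arm64 .
$ podman save -o myapp-arm64.tar myapp:arm64
$ scp myapp-arm64.tar admin@192.168.88.1:

…then at the RouterOS console:

/container> add file=myapp-arm64.tar interface=veth1 root-dir=usb1/myapp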

CHR Complications

Another guise in which I’ve seen this misconception appear involves MikroTik’s Cloud Hosted Router (CHR) distribution of RouterOS. In brief, I strongly recommend that you do not run containers atop CHR. As to the why of it, there are three separate sub-cases:

1. “Cloud Hosted Router”

The name says it all: the intent is that you run this Router Hosted out in the Cloud, a euphemism for “someone else’s hardware.” Popular options with first-party support for CHR include Hetzner, DigitalOcean, and AWS.

If this is your situation, you might be tempted by RouterOS’s container feature, thinking it lets you use your CHR as a poor-man’s Proxmox in the cloud, thereby saving you from starting additional cloud host instances for the other containers. The primary problem with this idea is the bare-bones nature of RouterOS’s container engine. You might be able to make it work, but make no mistake: you’ll be missing a lot compared to having a full-featured container engine underneath you.

2. Bare-Metal Hypervisor

If instead you’re running CHR on-prem atop a bare-metal hypervisor such as ESXi, then in my amazingly humble opinion 😛 there is no good reason at all to run containers atop it. It amounts to double virtualization, so it can only slow your application down. Run your other applications alongside CHR as separate VMs out on the hypervisor; don’t run them atop CHR as containers.

If you have an application that only comes as a container — or at least one that’s best deployed as a container — then I still recommend that you install a bare-bones container host OS such as Flatcar or CoreOS atop the hypervisor, then run the container out there rather than under CHR. Unlike RouterOS’s container feature, a proper container host OS will not be bleeding-edge, thinly-featured, and poorly-documented.

3. Hosted Hypervisor

If you’re running CHR on a desktop OS under a so-called type-2 hypervisor,² there is only one good reason I can think of to deploy containers atop CHR: you’re testing ideas that you intend to eventually deploy atop RouterOS-based hardware.

I believe this case devolves to the general-purpose advice above: install a full-powered container development environment either on the host or under a test VM compatible with your container’s base image, and do your development work there. Only once you have the container working should you try to port it to CHR, and then merely to check that it works before deploying to actual hardware.

It is for this same basic reason that both columns are filled in this table:

OS Family        VM Manager               Container Engine
Proxmox VE       primary feature          Proxmox Containers
QNAP QTS/QuTS    Virtualization Station   Container Station
Synology DSM     Virtual Machine Manager  Container Manager
TrueNAS SCALE    Virtualization           Jailmaker

If containers and VMs are interchangeable, why do each of these platforms offer both?

License

This work is © 2022-2025 by Warren Young and is licensed under CC BY-NC-SA 4.0


  1. As of this writing, that is primarily the CCR2116, the CCR2216, and the new CCR-based RDS series.
  2. KVM, VirtualBox, Hyper-V, VMware Workstation, Parallels…