Foundations
I've calved off a companion article by the same name for a general audience of Podman users. This is my preferred tool for creating and testing RouterOS containers, and nearly every point made there applies to RouterOS. Please do start there, then come back here for the peripheral details specific to `container.npk`, the RouterOS package providing but a small subset of Podman’s broad feature suite.
Key Misconceptions Punctured
There are several accusations commonly made by users familiar with containers on other platforms but new to `container.npk`. They will say that RouterOS’s feature set is…
- bleeding-edge (true)
- thinly-featured (also true)
- poorly-documented (quite true)
- and therefore entirely broken (not true)
If you find yourself slipping down this slope of illogic, it typically betrays a fundamental misunderstanding that the companion article goes to some pains to sort out. I said it above, and I will say it again: go read that, first, please.
With that grounding, we can now focus on RouterOS-specific details.
Several of the designed-in limitations of `container.npk` combine to produce unfortunate practical effects. Here’s a biggie:
```
> /container
/container> add remote-image=docker.io/library/ubuntu interface=veth1
/container> start ubuntu
/container> shell cmd="cat /etc/os-release" ubuntu
container not running
```
If you then say `print` to find out why it gave that error, you find the “S” flag, meaning it’s stopped. Why? You just started it, right?
Yes indeed, but does that mean it’s broken? Nope. What it did was start the default `ENTRYPOINT=/bin/sh` detached from any terminal, which caused the shell to immediately exit, having done everything it is possible for it to do under that condition.
But why?
It is because containers are not VMs.
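If you merely want to keep such a shell-only image alive long enough to `shell` into it, one common workaround is to point the entrypoint at a long-lived command. A sketch, assuming the same `veth1` setup as above and that this lands as container number 0:

```
/container/add remote-image=docker.io/library/ubuntu interface=veth1 \
    entrypoint=sleep cmd=infinity
/container/start 0
/container/shell 0
```

This keeps PID 1 occupied indefinitely, so the container stays in the running state; it papers over the symptom without changing the underlying fact that the image ships no real service.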
You can try and fight it, if you like:
```
$ ssh myrouter
> /container
/container> add …Ubuntu System Container You Found Somewhere…
/container> shell 0
…aha, success!…
# sudo apt install $A_BUNCH_OF_STUFF
# sudo systemctl enable $MY_NEW_SERVICE
```
…and so forth.
That's easy, and it comforts old Linux sysadmins, but I believe the rewards of using `container.npk` in accord with its design limitations will pay off quickly. Pounding nails in with the butt of a screwdriver is liable to cause grievous hand injuries.
It would be an overreach to insist on going full-on microservices to better suit the model, but it must be said that this particular design trope is highly compatible with the intentional design limitations of `container.npk`. RouterOS loves microservices.
A good rule of thumb is that application containers are measured in megs, system containers in gigs.
The facts on the ground underpinning this advice might change in the future, and I may then change this article to track that developing trend line. The best current reason to expect any movement here is the advent of the RDS2216, which has the storage options to allow disregarding that megs-vs-gigs distinction. Yet, fundamental limitations remain between `container.npk` and something like Podman, which is why I still advise you to resist the urge to plan on a mass migration of your flock of VMs onto a ROSE server packed full of SSDs.
You can blur the lines, but ultimately if you find yourself going down this path, moving your workloads to a proper container-focused OS is a better use of your resources.
MikroTik ARM Hardware
At a high level, the `container.npk` package is built for two platforms only. We will get to the second of these — CHR — below, but for now, we wish to focus on the primary deployment target, MikroTik’s ARM-based hardware devices.
With but rare exceptions,¹ this hardware is all fairly low-spec. Indeed, MikroTik is famous for delivering a tremendous amount of networking grunt with very little in the way of hardware resources.
The problem this then raises is, many containers are designed with high-end desktop or server-grade hardware in mind. The tech comes out of the corporate IT and cloud computing worlds, where computers are measured in terms of the tonnage limitations on forklifts and the BTU ratings on industrial air-handling units. In that world, the relevant question is not whether you have enough CPU and RAM to run a container, it’s how many CPUs you wish to run a given container across, in parallel!
The RouterOS world differs materially. We may well run many MikroTik devices on a single LAN, but there is little point in trying to wrangle them into service as a Kubernetes cluster.
The area where we see this distinction most starkly is in MikroTik’s ARM-based switches, which can barely run containers at all. Because they’re designed to offload most of the processing to the switch chip, they include fairly weak CPUs, and atop that, they tend to have very little in the way of free storage, both flash and RAM. It would be an overreach to say they’re single-purpose devices — indeed, they are surprisingly powerful and flexible — but adding more tasks via containers is best done carefully, with these restrictions kept firmly in mind.
The story is better with MikroTik’s ARM-based routers, since a good many do have USB or NVMe expansion options, and across the board they have better CPUs and more internal storage, both flash and RAM.
Yet even so, a biggish home router like the RB5009 is only about as powerful as a Raspberry Pi 4, and it has plenty of other stuff to do in the background besides running your containers. There are bigger ARM routers in the MikroTik line such as the CCR2116, but if you can justify running an expensive CCR, it’s a shame to soak up its considerable specialized power merely to run flabby containers. Efficiency continues to matter even at the high end.
These pressures encourage use of application containers, as opposed to system containers. Under that restriction, containers mate well with RouterOS’s design philosophy.
General Advice for Building RouterOS Containers
I find it easiest to develop a new container on top of a full OS that is as similar as possible to the one you’re using for the container’s base layer. I’m a particular fan of Alpine when it comes to RouterOS containers, but for other purposes it might be Debian or Red Hat UBI. This allows you to try things out interactively without reattempting a potentially costly build. Instead, apply each lesson learned to the `Containerfile` incrementally, and only begin build trials once you gain a measure of confidence that it should work.
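As a concrete illustration of where that process should end up, here is a minimal application-container sketch in the Alpine style. The static site path and choice of `darkhttpd` are placeholders of my own, not anything prescribed by RouterOS; the point is a single foreground service atop a tiny base:

```
# Hypothetical example: a small web server atop Alpine.
FROM docker.io/library/alpine:3.20

# darkhttpd is in Alpine's community repository, enabled by
# default in the official image.
RUN apk add --no-cache darkhttpd

COPY site/ /var/www/
EXPOSE 8080

# Run in the foreground as PID 1; there is no init system and no
# terminal under container.npk, so a daemonizing service would
# exit immediately, as in the Ubuntu example earlier.
ENTRYPOINT ["darkhttpd", "/var/www", "--port", "8080"]
```

An image built this way lands in the megs range, squarely within the application-container rule of thumb above.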
Then, having gotten the image to build, do at least one test deployment locally, on the build host. A good many of the problems I see people run into with containers on RouterOS would’ve occurred under Podman Desktop as well. If your idea doesn’t work on your development machine, it certainly isn’t going to work under the highly specialized CHR environment or a constrained ARM device running RouterOS.
Next, script the commands to generate, package, and deploy that image atop the development host’s container runtime.
Only once all of that is working should you bother with porting that to your RouterOS platform of choice. You’ll save yourself a lot of pain this way, shaking out all the basic problems atop a full-powered container development platform first.
Although use of a VM results in double-virtualization, it’ll still run faster than on RouterOS hardware.
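The generate-package-deploy step can be sketched as a console transcript. The image and router names here are placeholders, and I am assuming Podman on the build host plus a USB disk mounted on the router as `usb1`:

```
$ podman build -t mysvc:latest .
$ podman save -o mysvc.tar mysvc:latest
$ scp mysvc.tar admin@myrouter:/usb1/
$ ssh admin@myrouter /container/add file=usb1/mysvc.tar interface=veth1
```

Putting those four commands in a script means each iteration of your build-test cycle on RouterOS is one command, which matters once you find yourself on the tenth round of debugging.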
CHR Complications
That brings us to MikroTik’s Cloud Hosted Router distribution of RouterOS, the other major option for using `container.npk`, after MikroTik's ARM hardware. In brief, I strongly recommend that you do not run containers atop CHR. As to the why of it, there are three separate sub-cases:
1. “Cloud Hosted Router”
The name says it all: the intent is that you run this Router Hosted out in the Cloud, a euphemism for “someone else’s hardware.” Popular options for this which have first-party support for CHR include Hetzner, DigitalOcean, and AWS.
If this is your situation, you might be tempted by RouterOS’s container feature, thinking it lets you use your CHR as a poor-man’s Proxmox in the cloud, thereby saving you from starting additional cloud host instances for the other containers. The primary problem with this idea is the bare-bones nature of RouterOS’s container engine. You might be able to make it work, but make no mistake: you’ll be missing a lot compared to having a full-featured container engine underneath you.
2. Bare-Metal Hypervisor
If instead you’re running CHR on-prem atop a bare-metal hypervisor such as ESXi, then in my amazingly humble opinion 😛 there is no good reason at all to run containers atop it. It amounts to double virtualization, so it can only slow your application down. Run your other applications alongside CHR as separate VMs out on the hypervisor; don’t run them atop CHR as containers.
If you have an application that only comes as a container — or at least one that’s best deployed as a container — then I still recommend that you install a bare-bones container host OS such as Flatcar or CoreOS atop the hypervisor, then run the container out there rather than under CHR. Unlike RouterOS’s container feature, a proper container host OS will not be bleeding-edge, thinly-featured, and poorly-documented.
3. Hosted Hypervisor
If you’re running CHR on a desktop OS under a so-called type-2 hypervisor,² there is only one good reason I can think of to deploy containers atop CHR: you’re testing ideas that you intend to eventually deploy atop RouterOS-based hardware.
I believe this case devolves to the general-purpose advice above: install a full-powered container development environment either on the host or under a test VM compatible with your container’s base image, and do your development work there. Only once you have the container working should you try to port it to CHR, and then merely to check that it works before deploying to actual hardware.
Conclusion
Do not mistake the purpose behind my criticisms and warnings above. I think `container.npk` is a tremendous addition to RouterOS. All I wish to get across is that it is a tool best used in accord with its design, lest you cause yourself unnecessary pain.
License
This work is © 2022-2025 by Warren Young and is licensed under CC BY-NC-SA 4.0.