# Motivation
The [RouterOS `container.npk` feature](https://help.mikrotik.com/docs/display/ROS/Container) is highly useful, but it is a custom development written in-house by MikroTik, not a copy of Docker Engine or any of the other server-grade container engines.(^Podman, LXC/LXD, etc.) Because of the stringent resource constraints on the bulk of MikroTik’s devices, it is exceptionally small, thus unavoidably very thinly featured compared to its big-boy competition. If we can use installed size as a proxy for expected feature set size, we find:
* **Docker Engine**: 422 MiB(^Version 27.1.1, according to `dnf remove docker-ce…` after installing these packages [per the instructions](https://docs.docker.com/engine/install/rhel/#install-docker-engine). Note also that this is the “engine” alone, leaving out the extra gigabyte of stuff that makes up Docker Desktop. This is what you’d run on a remote server, the closest situation to what a headless RouterOS box provides.)
* **`containerd`+`nerdctl`**: 174 MiB(^This is essentially Docker Engine minus the build tooling. The size is for version 2.0.0-rc1 of `nerdctl` plus the `containerd` from the Docker Engine CE install above, according to `sudo dnf remove containerd` and `du -sh nerdctl`.)
* **Podman**: 107 MiB(^Version 4.9.4 on EL9, according to `sudo dnf remove podman conmon crun`.)
* **`container.npk`**: _0.0626 MiB_(^Version 7.15.2, according to `/system/package/print`.)
And this is fine! RouterOS serves a particular market, and its developers are working within those constraints. The intent here is to provide a mapping between what people expect of a fully-featured container engine and what you actually get in RouterOS. Where it makes sense, I try to provide workarounds for missing features and guidance to alternative methods where RouterOS’s way merely *works* differently.
I resorted to that “sleep 3600” hack in order to work around the lack of interactive mode in `container.npk`, without which containers of this type will start, do a whole lot of _nothing_, and then stop. I had to give it some type of busy-work to keep it alive long enough to let me shell in and do my actual work. This sneaky scam is a common one for accomplishing that end, but it has the downside of requiring you to predict how long you want the container to run before stopping; this version only lasts an hour.
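In `container.npk` terms, that hack looks something like the following. This is a minimal sketch, not a drop-in command: it assumes you have already created a `veth1` interface for the container and pointed `/container/config` at a registry so the image can be pulled by name.

    /container/add remote-image=alpine:latest interface=veth1 entrypoint=sleep cmd=3600 logging=yes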
If you are imagining more complicated methods for keeping containers running in the background when they were designed to run interactively, you are next liable to fall into the trap that…
# <a id="cmd"></a>There Is No Host-Side Command Line Parser
The RouterOS CLI isn’t a Bourne shell, and the container feature’s `entrypoint` and `cmd` option parsers treat them as simple strings, without any of the parsing you get for free when typing `docker` commands into a Linux command shell. The net effect of all this is that you’re limited to two-word commands, one in `entrypoint` and the other in `cmd`, as in the above “`sleep 3600`” hack.
But how then do you say something akin to the following under RouterOS?
docker run -it alpine:latest ls -lR /etc
You might want to do that in debugging to find out what a given config file is called and exactly where it is in the hierarchy so that you can target it with a `mount=…` override. If you try to pass it all as…
It is for this inherent reason that `container.npk` cannot provide equivalents of Docker’s `attach` command, nor its “`docker run --attach`” flag, nor the common “`docker run -it`” option pair. The closest it comes to all this is its [`shell`](#shell) command implementation, which can connect your local terminal to a true remote Linux terminal subsystem. Alas, that isn’t a close “`run -it`” alternative because you’re left typing commands at this remote shell, not at the container’s `ENTRYPOINT` process. Even then, it doesn’t always work since a good many containers lack a `/bin/sh` program in the first place, on purpose, typically to reduce the container’s attack surface.(^Indeed, all of [my public containers](https://hub.docker.com/repositories/tangentsoft) elide the shell for this reason.)
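When the image does include a shell, reaching it looks roughly like this; a sketch, assuming your container shows up as number 0 in the `/container/print` output:

    /container/shell number=0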
# <a id="logs"></a>Log Handling
Although Docker logging is tied into this same Linux terminal I/O design, we cannot blame the lack of an equivalent to “`docker logs`” on the RouterOS design principles in the same manner as [above](#terminal). The cause here is different, stemming first from the fact that RouterOS boxes try to keep logging to a minimum by default, whereas Docker logs everything the container says, without restriction. RouterOS takes the surprising default of logging to volatile RAM in order to avoid burning out the flash. Additionally, it ignores all messages issued under “topics” other than the four preconfigured by default, a set that does not include the “container” topic you get access to by installing `container.npk`.
To prevent your containers’ log messages from being sent straight to the bit bucket, you must say:
/container/{add,set} … logging=yes
/system/logging add topics=container action=…
Having done so, we have a new limitation to contend with: RouterOS logging isn’t as powerful as the Docker “`logs`” command, which by default works as if you asked it, “Tell me what this particular container logged since the last time I asked.” RouterOS logging, on the other hand, mixes everything together in real time, requiring you to dig through the history manually.
(The same is true of `podman logs`, except that it ties into systemd’s unified “journal” subsystem, a controversial design choice that ended up paying off handsomely when Podman came along and wanted to pull up per-container logs to match the way Docker behaved.)
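You can at least filter the shared history down to the container topic after the fact. As a sketch, assuming you left the logging action at its in-memory default, something like this should pull out only the container-related entries:

    /log/print where topics~"container"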
# <a id="cpu"></a>CPU Limitations
This limitation comes in two subclasses:
## <a id="emu"></a>There Is No Built-In CPU Emulation
Docker and Podman both let you transparently emulate foreign CPU types, which lets you run an image built for another CPU architecture on your local system. If you are on an x86_64 host, this should drop you into a command shell:
$ docker run --rm -it --platform linux/arm64 alpine:latest
The same will work on recent versions of Podman, and you can get it to work on old versions of Podman with a bit of manual setup.(^It’s off-topic to go into the details here, but it amounts to “`podman machine ssh`” followed by a “`dnf install qemu-static-*`” command.)
For that to work under `container.npk`, the RouterOS developers would have to ship the QEMU and Linux kernel [`binfmt_misc`](https://en.wikipedia.org/wiki/Binfmt_misc) bridges needed to get the OS to accept these “foreign” binaries. Since it would approximately double the size of RouterOS to do this for all the popular CPU architectures, they naturally chose _not_ to do this.
What this means in practice is that you have to be sure the images you want to use were built for the CPU type in your RouterOS device. This is true even between closely-related platforms. An ARM64 router won’t run a 32-bit ARMv7 image, if only because it will assume a 32-bit Linux kernel syscall interface.
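Two quick checks help avoid the mismatch; a sketch, with one command run on the router and one on a Docker-equipped workstation, and with `alpine:latest` standing in for whatever image you care about:

    # On the router: the "architecture-name" field reports the CPU type.
    /system/resource/print

    # On the workstation: list the architectures published for the image.
    $ docker manifest inspect alpine:latest | grep -i architecture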
<a id="qemu"></a>There is an exception: you can ship your own CPU emulation. Take [this thread](https://forum.mikrotik.com/viewtopic.php?t=189485), for example, which describes a container that bundles the 32-bit Intel-compiled `netinstall-cli` Linux binary along with an ARM build of `qemu-i386` so that it will run on ARM RouterOS boxes. For a process that isn’t CPU-bound — and netinstall is very much I/O-bound — this can be a reasonable solution as long as you’re willing to pay the ~4 megs the emulator takes up.
## <a id="compat"></a>It Only Supports Intel and ARM
MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, most recently as of this writing in their [CRS518-16XS-2XQ-RM](https://mikrotik.com/product/crs518_16xs_2xq). Atop that, there are other CPU architectures in the historical mix like PowerPC and TILE. MikroTik doesn’t ship a `container.npk` for any of these platforms.
But why not?
The simple reason is, the major container build toolchains — mainly `docker buildx` and its clones — don’t target these other CPU types, a fact you can verify for yourself by running…
$ docker run --privileged --rm tonistiigi/binfmt
“But **a-hah**,” you cry! “There’s MIPS *and* PPC in that list!” You then demand, “Where’s my build of `container.npk` for them, then?”
Did you overlook the “64” in those platform names? They refer to modern 64-bit versions of these CPUs, but MikroTik has never shipped a device with a 64-bit MIPS or PowerPC CPU, not even the 100G behemoth linked above, which gets by with a 32-bit MIPSBE CPU because it is designed as a switch, not a router, offloading nearly all traffic to the switch ASIC.
What the above command tells you is that you can build images using commands like this:
docker buildx build --platform linux/mips64 …
You can do that all day long, but nothing you do will force a MIPSBE build of `container.npk` to run the resulting binaries short of including a CPU emulator in the image, [per above](#qemu).
You may then point out that you don’t actually need the cross-compilation toolchain to exist in Docker proper. FOSS toolchains do exist for TILE, 32-bit PPC, MMIPS, SMIPS… Why can’t you use them to cross-compile binaries on your desktop machine and then use tools like [Buildah](https://buildah.io/) to copy those binaries into the image unchanged?
You can, but now you’ve bought several new problems:
1. My sense of MikroTik’s public statements on this matter is that, until someone actually does this and presents a compelling case, they have no interest in spending the engineering time to build `container.npk` for those old CPU designs. It’s not quite a classic chicken-and-egg situation, but without working images in hand, I don’t see a bid to make MikroTik append this task to its development pipeline succeeding.(^I base that interpretation on the hiss of radio silence I got in response to [this challenge](https://forum.mikrotik.com/viewtopic.php?t=204868#p1058351).)
2. It’s only “easy” for static binaries. If your program requires third-party shared libraries, you have to build them, too, along with the dynamic link loader, and… You’re likely to find yourself building a small Linux distribution as dependency leads to dependency, and now you’re off on a major [yak-shaving](https://www.hanselman.com/blog/yak-shaving-defined-ill-get-that-done-as-soon-as-i-shave-this-yak) expedition.
3. This plan ignores one of the most powerful ideas in the OCI container ecosphere: you don’t cross-compile programs at all, you boot a Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries. What this means in practice is that if you can assemble:
* a sufficiently complete Linux distro ported to your target platform; _and_
* a version of QEMU that will run it on your development system; _and_
* the binfmt patches necessary to let your local kernel tie the two together…
…then you can get the `buildx` tooling to build foreign binaries under all this that will run on your target platform without you ever needing to mess about with cross-compiler toolchains.
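For CPU families the mainstream tooling already covers, that whole assembly comes prepackaged, and the dance reduces to a couple of commands; a sketch, assuming Docker with `buildx` on an x86_64 host, a `Dockerfile` in the current directory, and a made-up image tag:

    $ docker run --privileged --rm tonistiigi/binfmt --install arm64
    $ docker buildx build --platform linux/arm64 -t example/hello:arm64 --load .

The unsupported targets above would require you to supply the distro, QEMU, and binfmt pieces yourself before a command like that second one could succeed.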
And so there you have it: the method by which you can take up the challenge I laid out in my forum post above. Find or build a PowerPC or MMIPS or SMIPS or TILE Linux distro, then use _that_ to build OCI images. At that point, MikroTik _might_ be forced to take notice and provide a build of `container.npk` for that platform.
Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.
# <a id="tlc"></a>Remaining Top-Level Commands
So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.
For lack of any better organization principle, I’ve chosen to cover the remaining `docker` CLI commands in alphabetical order. I skip over short aliases like `docker rmi` for `docker image rm` in order to cover things only once, and I don’t repeat any of the `create`/`load`/`run` discussion [above](#create). Because Podman cloned the Docker CLI, this ordering matches up fairly well with its top-level command structure as well, the primary exception being that I do not currently go into any of Podman’s pure extensions, ones such as its eponymous `pod` command.