# <a id="cpu"></a>CPU Limitations
This limitation comes in two subclasses:
## <a id="emu"></a>There Is No Built-In CPU Emulation
Docker and Podman let you run an image built for another architecture on your local system through transparent CPU emulation. If you are on an x86_64 host, try this command:
```
$ docker run --rm -it --platform linux/arm/v7 alpine:latest uname -m
```
That should yield “`armv7l`”, an entirely different CPU architecture from your host. Even if you try this on an ARM64 host (e.g. an Apple Silicon macOS box), you still need transparent CPU emulation to cope with the different machine word size.
Recent versions of Podman give the same result, and you can get it to work on older versions with a bit of manual setup.(^It’s off-topic to go into the details here, but it amounts to “`podman machine ssh`” followed by a “`dnf install qemu-static-*`” command.)
For that to work under `container.npk`, the RouterOS developers would have to do the same thing Docker and Podman do: ship the QEMU and Linux kernel [`binfmt_misc`](https://en.wikipedia.org/wiki/Binfmt_misc) bridges needed to get the OS to accept these “foreign” binaries. Since it would approximately double the size of RouterOS to do this for all the popular CPU architectures, they naturally chose _not_ to do this.
What this means in practice is that you have to be sure the images you want to use were built for the CPU type in your RouterOS device.
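A quick way to check is to ask the registry which platforms an image’s manifest covers; the CPU type of your device must be among them. A minimal sketch using the stock Docker CLI, with output abridged:

```
$ docker manifest inspect alpine:latest | grep '"architecture"'
         "architecture": "amd64",
         "architecture": "arm64",
         ...
```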
<a id="qemu"></a>There is a path around this obstacle: ship your own CPU emulation, as was done in [this forum thread](https://forum.mikrotik.com/viewtopic.php?t=189485), which describes a container that bundles the 32-bit Intel-compiled `netinstall-cli` Linux binary along with an ARM build of of `qemu-i386` so that it will run on ARM RouterOS boxes. For a process that isn’t CPU-bound — and NetInstall is very much I/O-bound — this can be a reasonable solution as long as you’re willing to pay the ~4 megs the emulator takes up.
## <a id="compat"></a>It Only Supports Intel and ARM
## <a id="compat"></a>Intel and ARM Only
If you [run the binfmt test image](https://github.com/tonistiigi/binfmt#build-test-image) under your container build system of choice,(^Simplest method: `docker run --privileged --rm tonistiigi/binfmt`) it is likely to list several CPU types besides Intel and ARM, but that only tells you which platforms you can build an image for, not which platforms your runner — `container.npk` in this case — will accept. The prior point about lack of CPU emulation means you must find exact matches in this list for the CPU type in your chosen RouterOS device.
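On a host with Docker Desktop, that test reports something shaped like this (abridged); note that it lists what you can build _for_, saying nothing about what any given runner will accept:

```
$ docker run --privileged --rm tonistiigi/binfmt
{
  "supported": [
    "linux/amd64",
    "linux/arm64",
    "linux/riscv64",
    "linux/ppc64le",
    "linux/s390x",
    "linux/mips64le",
    ...
  ],
  "emulators": [
    ...
  ]
}
```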
MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, most recently as of this writing in their [CRS518-16XS-2XQ-RM](https://mikrotik.com/product/crs518_16xs_2xq). Atop that, there are other CPU architectures in the historical mix like PowerPC and TILE. MikroTik doesn’t ship a `container.npk` for any of these platforms.
But why not?
To bring up each new build target, the creators of your container build toolchain of choice must bring together:
* a QEMU emulator for the target system
* a sufficiently complete Linux distro ported to that target
* the `binfmt_misc` kernel modules that tie these two together
QEMU is “easy” in the sense that the hard work has already been done; there are QEMU emulators for every CPU type MikroTik ever shipped. ([Details](https://www.qemu.org/docs/master/system/targets.html)) There’s a partial exception with TILE, which once existed in QEMU core but has been removed for years, following the removal of TILE support from the Linux kernel. The thing is, TILE hasn’t progressed in the meantime, so bringing up a QEMU TILE emulator should be a matter of digging that old code back out of source control, then putting in the work to port it to a decade-newer version of Linux.
The binfmt piece is also easy enough.
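To make that claim concrete: registering an emulator amounts to writing one formatted line into a kernel control file. This sketch uses QEMU’s published magic/mask strings for 32-bit ARM ELF binaries; the interpreter path is an assumption that varies by distro:

```
# Mount the control filesystem if it isn't already mounted.
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc

# Format is :name:type:offset:magic:mask:interpreter:flags.  The "F" flag
# makes the kernel pin the interpreter at registration time, which is what
# lets it be found from inside a container's mount namespace.
printf '%s' ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:F' \
    > /proc/sys/fs/binfmt_misc/register
```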
That leaves the Linux distros for the target platforms, used as container base images. That’s the true sticking point.
One of the most powerful ideas in the OCI container ecosphere is that you don’t cross-compile programs, you boot an _existing_ Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries, which the `binfmt_misc` piece then turns back around and runs under QEMU again. The hard work goes into producing the OS image, after which it’s less work overall this way.
The trick is finding that base OS image in the first place.
It’s a lot of work to get a single new Linux distro working under `buildx`, even if you start with an existing third-party port such as the Mac PPC builds of Ubuntu. Good luck if you want to support an oddball CPU like TILE, though.
For instance, you might have an existing `Dockerfile` that says `FROM ubuntu:latest` at the top, and you want to run it on a PPC router. While Ubuntu doesn’t ship any PPC OS images, there have been efforts to port Ubuntu to PPC Macs, and one of _those_ third-party distros might serve as an OCI container build base.
But then, having done so, you’re in a fresh jam when the next container you want to build says “`FROM`” something else; `ubi9`, for instance. It’s doubtful you will find a “RHEL for PPC Macs” type of OS distro, leading you to a second-best option of porting the container from RHEL to the Mac Ubuntu image base you already have.
When you next come across one of the huge number of containers based on Alpine, you’ll be back in the soup once again. While [its CPU support list](https://wiki.alpinelinux.org/wiki/Requirements) is broader than [the one for Ubuntu](https://ubuntu.com/cpu-compatibility), there is no TILE or MIPS at all, and its PPC support is 64-bit only. Are you going to port the Alpine base image and enough of its package repository to get your container building?
Then there’s Debian, another popular OCI image base, one that’s been ported to a lot of strange platforms, but chances are that it was someone’s wild project, now abandoned. It’s likely the APT package repo isn’t working any more, for one, because who wants to host a huge set of packages for a dead project?
In brief, the reason MikroTik doesn’t ship `container.npk` for 32-bit PPC, 32-bit MIPS, and TILE is that there are few Linux distro images in OCI format to use as base images, and it isn’t greatly in their interest to pull that together along with the QEMU and `binfmt_misc` pieces for you, nor is it in the financial interest of Docker, Podman, etc.
There’s nothing stopping anyone reading this who has the skill and motivation from doing that work, but you’ll have to prove out your containers under emulation first. Not until then do I see MikroTik taking notice and providing a build of `container.npk` for that platform. It’s not quite a classic chicken-and-egg situation, but I can’t ignore the hiss of radio silence I got in response to [this challenge](https://forum.mikrotik.com/viewtopic.php?t=204868#p1058351) on the forum.
Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.
Incidentally, exploration of the binfmts available to you on your container build host of choice might result in output like `linux/mips64le`, leaving you exulting, “See, there _is_ MIPS support!” But no. First off, this is 64-bit MIPS, while all MIPS CPUs shipped by MikroTik to this date have been 32-bit. Second, it’s [little-endian](https://en.wikipedia.org/wiki/Endianness) (LE) which means it wouldn’t work with the big-endian MIPS CPUs that were more popular historically. Third, even if you find/build a platform that includes support for the MIPSBE, MMIPS, and SMIPS CPU types MikroTik shipped, you’re likely back to lack of a base OS to build from.
# <a id="auto"></a>Automation
Included in the list of lacks [above](#global) is the [Docker Engine API][DEAPI]. The closest extant feature is the [RouterOS REST API][ROAPI], which can issue commands equivalent to those available at the CLI via `/container`. With this, you can programmatically add, remove, start, and stop containers, plus more.
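As a sketch of what that looks like in practice (the address, credentials, and `.id` value below are placeholders, and the REST interface requires the `www-ssl` service to be enabled):

```
# Equivalent of "/container print":
curl -k -u admin:secret https://192.168.88.1/rest/container

# Start a container by the ".id" value found in that listing:
curl -k -u admin:secret -X POST \
     -H 'Content-Type: application/json' \
     -d '{".id":"*1"}' \
     https://192.168.88.1/rest/container/start
```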
What RouterOS does _not_ offer is a way for common control plane software like Docker Desktop or [Portainer](https://www.portainer.io/) to manage the containers running on your routers. This is because these programs were written with the assumption that everyone’s running Docker or Podman underneath, and as long as they stick to a compatible subset of the Docker Engine API, implementation details cease to matter up at these programs’ level of abstraction.
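For contrast, here is the sort of Docker Engine API call such programs make under the hood, for which `container.npk` offers no listener:

```
# Ask the local Docker daemon for its running containers over the same
# Unix socket that tools like Portainer talk to.
curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json
```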