# Container Limitations

# Motivation

The [RouterOS `container.npk` feature](https://help.mikrotik.com/docs/display/ROS/Container) is highly useful, but it is a custom development written in-house by MikroTik, not a copy of Docker Engine or any of the other server-grade container engines.(^Podman, LXC/LXD, etc.) Because of the stringent resource constraints on the bulk of MikroTik’s devices, it is exceptionally small, thus unavoidably very thinly featured compared to its big-boy competition. If we can use installed size as a proxy for expected feature set size, we find:

* **Docker Engine**: 422 MiB(^Version 27.1.1, according to `dnf remove docker-ce…` after installing these packages [per the instructions](https://docs.docker.com/engine/install/rhel/#install-docker-engine). Note also that this is the “engine” alone, leaving out the extra gigabyte of stuff that makes up Docker Desktop. This is what you'd run on a remote server, the closest situation to what a headless RouterOS box provides.)
* **`containerd`+`nerdctl`**: 174 MiB(^This is essentially Docker Engine minus the build tooling. The size is for version 2.0.0-rc1 of `nerdctl` plus the `containerd` from the Docker Engine CE install above, according to `sudo dnf remove containerd` and `du -sh nerdctl`.)
* **Podman**: 107 MiB(^Version 4.9.4 on EL9, according to `sudo dnf remove podman conmon crun`.)
* **`container.npk`**: _0.0626 MiB_(^Version 7.15.2, according to `/system/package/print`.)

And this is fine! RouterOS serves a particular market, and its developers are working within those constraints. The intent here is to provide a mapping between what people expect of a fully-featured container engine and what you actually get in RouterOS. Where it makes sense, I try to provide workarounds for missing features and guidance to alternative methods where RouterOS’s way merely *works* differently.


# <a id="global"></a>Global Limitations

Allow me to begin with the major limitations visible at a global level in the RouterOS `container.npk` feature, both to satisfy the **tl;dr** crowd and to set broad expectations for the rest of my readers. This super-minimal container implementation lacks:

*   orchestration
*   image building
*   a local image cache
*   JSON and REST APIs
*   a [CoW]/overlay file system(^This is not a verified fact, but an inference based on the observation that if RouterOS _did_ have this facility underlying its containers, I would expect to find equivalents to Docker’s `commit` and `diff` commands. This pairs with the lack of an image cache: no [CoW] means no need for a baseline source to compute deltas against.)
*   per-container limit controls:(^The only configurable resource limit is on maximum RAM usage, and it’s global, not settable on a per-container basis.)
    *   FD count
    *   PID limit
    *   CPU usage
    *   storage IOPS
    *   `/dev/shm` size limit
    *   terminal/logging bps
* **systemd-nspawn**: 1.3 MiB(^[This][sdnsp] is the bare-bones OCI image runner built into systemd, with a feature set fairly close to that of `container.npk`. The size above is for version 252 of this program’s parent [`systemd-container`][sdcnt] package as shipped on EL9.)

One reason `container.npk` is far smaller than even the smallest of these runners is that the engines delegate much of what RouterOS lacks down to the runner, so even that is an unbalanced comparison. The [`kill`](#kill), [`ps`](#ps), and [`pause`](#pause) commands missing from `container.npk`, for instance, are provided in Docker Engine way down at the `runc` level, not up at the top-level CLI.

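To illustrate, those verbs do exist one layer down. Here is a sketch with bare `runc`, where the container name `web1` is a placeholder for one it manages:

    $ sudo runc list            # roughly what “docker ps” reports
    $ sudo runc ps web1         # processes running inside the container
    $ sudo runc pause web1      # freeze it; “runc resume web1” undoes this
    $ sudo runc kill web1 TERM  # send a signal, as “docker kill” would
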
With this grounding, let us dive into the details.

[caps]:   https://www.man7.org/linux/man-pages/man7/capabilities.7.html
[CoW]:    https://en.wikipedia.org/wiki/Copy-on-write
[rlimit]: https://www.man7.org/linux/man-pages/man2/getrlimit.2.html
[sdcnt]:  https://packages.fedoraproject.org/pkgs/systemd/systemd-container/
[sdnsp]:  https://wiki.archlinux.org/title/Systemd-nspawn


## <a id="create" name="load"></a>Container Creation

The single biggest area of difference between the likes of Docker and the RouterOS `container.npk` feature is how you create containers from OCI images. RouterOS combines Docker’s `create` and `load` commands under `/container/add`, expressing the distinction through the `remote-image` and `file` options, respectively.

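A minimal sketch of the two paths, assuming a `veth1` container interface and a `disk1` storage medium already exist on the router; the image and directory names are placeholders:

    # pull from a registry; requires registry-url to be set under /container/config first
    /container/add remote-image=alpine:latest interface=veth1 root-dir=disk1/alpine

    # or load a tarball produced elsewhere with “docker save” and uploaded to the router
    /container/add file=disk1/alpine.tar interface=veth1 root-dir=disk1/alpine
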
Given the size of the output from `docker create --help`, it should not be surprising that the bulk of that is either not available in RouterOS or exists in a very different form. Most of these limitations stem from [the list above](#global). For instance, the lack of any CPU usage limit features means there is no equivalent under `/container` for the several `docker create --cpu*` options. Rather than go into these options one by one, I’ll cover the ones where the answers cannot be gleaned through a careful reading of the rest of this article:
    /container/{add,set} … logging=yes
    /system/logging add topics=container action=…

Having done so, we have a new limitation to contend with: RouterOS logging isn’t as powerful as the Docker “`logs`” command, which answers the question, “Show me everything this particular container has logged.” RouterOS logging, on the other hand, mixes everything together in real time, requiring you to dig through the shared history manually.

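In practice the difference looks roughly like this, where the container name is a placeholder and the `topics~"container"` filter is my assumption about a reasonable way to narrow RouterOS’s shared log buffer:

    # Docker: one container’s log history, on demand
    $ docker logs mycontainer

    # RouterOS: everything shares one log; filter it after the fact
    /log print where topics~"container"
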
(`podman logs` behaves much the same way, except that it ties into systemd’s unified “journal” subsystem; that controversial design choice ended up paying off handsomely when Podman came along and wanted to pull up per-container logs to match the way Docker behaves.)


# <a id="cpu"></a>CPU Limitations

There are two subclasses:


## <a id="emu"></a>There Is No CPU Emulation

Docker and Podman both let you transparently emulate foreign CPU types, so you can run an image built for another CPU architecture on your local system. If you are on an x86_64 host, this should drop you into a command shell:

    $ docker run --rm -it --platform linux/arm64 alpine:latest

The same will work on recent versions of Podman, and you can get it to work on old versions of Podman with a bit of manual setup.(^It's off-topic to go into the details here, but it amounts to “`podman machine ssh`” followed by a “`dnf install qemu-static-*`” command.)

For that to work under `container.npk`, MikroTik would have to do the same type of thing: ship the QEMU user-mode emulators and the Linux kernel binfmt hooks needed to get the OS to accept these “foreign” binaries. Since doing this for all the popular CPU architectures would approximately double the size of RouterOS, they naturally do not.

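For a sense of what that would involve, this sketch shows roughly how the same capability gets bolted onto a full-sized Linux Docker host, using the `tonistiigi/binfmt` helper image from the Docker multi-platform build documentation; it illustrates the mechanism, not something you can do on a router:

    # register QEMU user-mode emulators with the kernel’s binfmt_misc handler
    $ docker run --privileged --rm tonistiigi/binfmt --install arm64,riscv64
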
What this means in practice is that you have to be sure the images you want to use were built for the CPU type in your RouterOS device. This is true even between closely related platforms: an ARM64 router won’t run a 32-bit ARMv7 image, if only because that image assumes a 32-bit Linux kernel syscall interface.

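A quick way to check both halves of that requirement, where `alpine:latest` stands in for whatever image you care about and the `grep` merely trims the output:

    # on the router: which CPU architecture is this device?
    /system/resource/print

    # on a workstation: which architectures does the image provide?
    $ docker buildx imagetools inspect alpine:latest | grep Platform
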

## <a id="compat"></a>It Only Supports Intel and ARM

MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, even in new products like its CRS5xx line. Atop that, there are other CPU architectures in the mix, like PowerPC and TILE. MikroTik doesn’t ship a `container.npk` for any of these platforms.

But why?

The simple reason is that the major container build toolchains, mainly `docker buildx` and its clones, don’t target these other CPU types.



# <a id="tlc"></a>Remaining Top-Level Commands

So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.

For lack of any better organizing principle, I’ve chosen to cover the remaining `docker` CLI commands in alphabetical order. I skip over short aliases like `docker rmi` for `docker image rm` in order to cover things only once, and I don’t repeat any of the `create`/`load`/`run` discussion [above](#create). Because Podman cloned the Docker CLI, this ordering also matches up fairly well with its top-level command structure, the primary exception being that I do not currently go into any of Podman’s pure extensions, such as its eponymous `pod` command.