MikroTik Solutions

Changes To Container Limitations
Changes to "Container Limitations" between 2025-01-20 22:45:49 and 2025-01-24 09:22:13

# <a id="global"></a>Global Limitations

Allow me to begin with the major limitations visible at a global level in the RouterOS `container.npk` feature, both to satisfy the **tl;dr** crowd and to set broad expectations for the rest of my readers. This super-minimal container implementation lacks:

*   orchestration
*   rootless mode
*   image building
*   [local image cache](#cache)
*   [Docker Engine API][DEAPI]
*   volume storage manager
*   [CoW]/overlay file system(^This is not a verified fact, but an inference based on the observation that if RouterOS _did_ have this facility underlying its containers, several other limitations covered here would not exist.)
*   per-container limit controls:(^The only configurable resource limit is on maximum RAM usage, and it’s global, not settable on a per-container basis.)
    *   FD count
    *   PID limit
    *   CPU usage
*   **`--pid/uts`**: The RouterOS container runner must use Linux namespaces under the hood, but it does not offer you control over which PID, mount, network, user, etc. namespaces each container uses. See also [this](#root).

*   **`--read-only`**: RouterOS offers precious little in terms of file system permission adjustment. As a rule, it is best either to shell into the container and adjust permissions there or to rebuild the container with the permissions you want from the get-go. Any expectation of adjusting this between image download time and container creation time is likely to founder.

*   **`--restart`**: <a id="restart"></a>The closest RouterOS gets to this is its `start-on-boot` setting, meaning you’d have to reboot the router to get the container to restart. If you want automatic restarts, you will have to [script] it.

*   **`--rm`**: No direct equivalent, and until we get a `run` command and an image cache, it's difficult to justify adding it.(^There is a manual `/container/remove` command, but it does something rather different.)

*   **`--volume`**: This is largely covered under `--mount` above, but it’s worth repeating that `container.npk` has no concept of what Docker calls “volumes;” it _only_ has bind-mounts. In that sense, RouterOS does not blur lines as Docker and Podman attempt to do in their handling of the `--volume` option.
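
The [script] workaround mentioned under `--restart` above can be sketched as a scheduler-driven watchdog. This is a hypothetical example, not MikroTik-blessed practice: it assumes your container’s numeric ID is 0 and that its `status` field reads `running` while it is up; check `/container/print` for the actual values on your router.

    # Hypothetical watchdog: restart container 0 whenever it is found stopped.
    /system/script add name=ct0-watchdog source={
        :if ([/container/get 0 status] != "running") do={/container/start 0}
    }
    /system/scheduler add name=ct0-watchdog interval=1m on-event=ct0-watchdog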

That brings us to the related matter of…

[script]: https://help.mikrotik.com/docs/display/ROS/Scripting

    /container/{add,set} … logging=yes
    /system/logging add topics=container action=…

Having done so, we have a new limitation to contend with: RouterOS logging isn’t as powerful as the Docker “`logs`” command, which by default works as if you asked it, “Tell me what this particular container logged since the last time I asked.” RouterOS logging, on the other hand, mixes everything together in real time, requiring you to dig through the history manually.

(The same is true of `podman logs`, except that it ties into systemd’s unified “journal” subsystem, a controversial design choice that ended up paying off handsomely when Podman came along and wanted to pull up per-container logs to match the way Docker behaved.)
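
RouterOS does at least let you narrow the shared log down to container chatter after the fact. A minimal sketch, assuming the `container` logging topic was enabled as shown above:

    /log print where topics~"container"

This still interleaves messages from every container on the box, so with more than one running you are left attributing lines by their content.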


# <a id="cache"></a>There Is No Local Image Cache

I stated this [in the list above](#global), but what does that mean in practice? What do we lose as a result?

A surprising number of knock-on effects result from this lack:

1.  Registries with pull-rate limiting are more likely to refuse you during experimentation as you repeatedly reinstantiate a container trying to get it to work. This can be infuriating when it happens in the middle of a hot-and-heavy debugging session.

    The pricing changes made to Docker Hub in late 2024 play into this. They now impose a limit of 200 pulls per six hours on each free-tier user, where before they had an unlimited-within-reason policy for public repos. You can give RouterOS a Docker Hub user login name and a CLI token ("`password`") to work around that, saving you from the need to compete with all the other anonymous users pulling that image, including random bots on the Internet.

    The thing is, if RouterOS had an image cache, you would only have to pull the image once as long as you keep using the same remote image URL, as when trying out different settings. That would let you side-step the whole mess.

2.  If the container provides DNS, you may end up in a chicken-and-egg situation where the old container is down but now the router can't pull from the remote registry (e.g. Docker Hub) because it can no longer resolve `registry-1.docker.io`. An image cache solves this problem by allowing the runtime to pull the new image while the prior one still runs, then do the swap with both versions of the image in the cache. It even allows clever behavior like health checks to gate whether to continue with the swap or trigger a rollback.

3.  Equivalents for several of the "missing" commands [listed below](#tlc) cannot be added to `container.npk` without adding an image cache first: `commit`, `diff`, `pull`, etc.(^To be fair, a number of these commands only need to exist in the big-boy engines _because of_ the image cache: `rmi`, `prune`, etc.)
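
The Docker Hub credential workaround from item 1 amounts to a pair of global settings. A sketch with placeholder credentials; the CLI token goes in the `password` field:

    /container/config/set registry-url=https://registry-1.docker.io \
        username=myuser password=mytoken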

A broad workaround for _some_ of the above is having the foresight to pull the image using Docker or Podman, then save the image out as a tarball and using `/container/add file=` instead of `remote-image`. There are landmines along this path owing to the [OCI compatibility issue](#compliance) covered separately below.
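
That foresight amounts to the following recipe, sketched here assuming a Docker-equipped workstation, an `arm64` router reachable at 192.168.88.1, and `alpine` standing in for your image of choice:

    # On the workstation: pull for the router's CPU, not the workstation's.
    docker pull --platform linux/arm64 alpine:latest
    docker save alpine:latest -o alpine.tar
    scp alpine.tar admin@192.168.88.1:alpine.tar

    # Then, on the router:
    /container/add file=alpine.tar interface=veth1 root-dir=usb1/alpine

Mind the `--platform` flag: without it, Docker pulls the image matching your workstation’s architecture, which the router will refuse to run.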


# <a id="root"></a>Everything Is Rootful

This shows up in a number of guises, but the overall effect is that all containers run as a nerfed `root` user under `container.npk`, same as Docker did from the start. This remains the Docker default, but starting with the 20.10 release, it finally got a [rootless mode][drl] to compete with [Podman’s rootless-by-default][prl] nature. I bring up this history to show that RouterOS is not unconditionally “wrong” to operate as it does, merely limited.

This design choice may be made reasonably safe through the grace of [user namespaces](https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html), which cause the in-container `root` user to be meaningfully different from the Linux `root` user that RouterOS itself runs as. RouterOS does have a `/user` model, but they are not proper Linux users as understood by the kernel, with permissions enforced by Linux user IDs; RouterOS users have _no meaningful existence at all_ inside the container. One practical effect of this is that when you start a container as RouterOS user `fred`, you will not find a `fred` entry in its `/etc/passwd` file, and if you create one at container build time (e.g. with a `RUN useradd` command) it will not be the same `fred` as the RouterOS user on the outside.
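
You can check the remapping from inside any container that ships a normal Linux userland. A sketch, assuming container 0 is running and the kernel exposes `/proc/self/uid_map`:

    /container/shell 0
    # Now inside the container:
    id                        # uid=0(root), but only within this namespace
    cat /proc/self/uid_map    # shows how in-container uids map to host uids

Anything other than an identity mapping of 0 to 0 means the in-container `root` is not the host’s `root`.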

This is `/container/add file=oci-image.tar` in RouterOS.


## <a id="info" name="inspect"></a>`info`/`inspect`

With the understanding that RouterOS has far fewer configurables than a big-boy container engine, the closest commands in RouterOS are:

* `/container/config/print`
* `/container/print detail where …`
* `:put [:serialize value=[/container/get 0] to=json options=json.pretty]`

The first two produce output in typical RouterOS “print” format, not JSON, and each returns only a few lines of information. The last one was crafted by @Nick on the [MikroTik Discord][MTDisc]. It gives a pretty-printed JSON version of what you get from the second command, which is useful when automating `/container` commands via SSH, as with Ansible. Even so, all of this falls far short of the pages and pages of detail you get from the Docker and Podman CLI equivalents.

A related limitation is that configurable parameters are often global in RouterOS, set for all containers running on the box, not available to be set on a per-container basis. A good example of this is the memory limit, set via `/container/config/set ram-high=…`.

[MTDisc]: https://discord.gg/exGj6whYw7


## <a id="kill" name="stop"></a>`kill`/`stop`

RouterOS doesn’t make a distinction between “kill” and “stop”. The `/container/stop` command behaves more like `docker kill` or `docker stop -t0` in that it doesn’t try to bring the container down gracefully before giving up and killing it.
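
If your containerized service needs a graceful shutdown, one workaround is to deliver the signal yourself before stopping the container, assuming the image ships a shell and a `kill` utility:

    /container/shell 0
    # Inside the container: ask PID 1 to exit cleanly, then leave the shell.
    kill -TERM 1
    exit

Follow up with `/container/stop 0` once the process has finished flushing its state. Note that PID 1 only honors signals it installs handlers for, so this helps only with images designed to catch `SIGTERM`.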


See also [`--restart`](#restart) above.


## <a id="rm"></a>`rm`

RouterOS spells this `/container/remove`, but do be aware, there is no equivalent for `docker rm -f` to force the removal of a running container. RouterOS makes you stop it first.
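
The two-step equivalent of `docker rm -f` is therefore:

    /container/stop 0
    /container/remove 0

If the `remove` complains that the container is still running, wait for its status to read `stopped`; the `stop` command may return before shutdown completes.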

Another knock-on effect stems from the lack of a local image cache: removing a container and reinstalling it from the *same* remote image forces RouterOS to re-download the image, even back-to-back, and even if you never started the container in between, so that the expanded image’s files could not have changed. You can end up hitting annoying rate-limiting on the “free” registries in the middle of a hot-and-heavy debugging session due to this. Ask me how I know. 😁

The solution is to produce an [OCI] image tarball in the [format subset](#compliance) that `/container/add file=…` will accept.


## <a id="search"></a>`search`

There is no equivalent to this in RouterOS. You will need to connect to your image registry of choice and use its search engine.


## <a id="secret"></a>`secret`