# Motivation
The [RouterOS `container.npk` feature](https://help.mikrotik.com/docs/display/ROS/Container) is highly useful, but it is a custom development written in-house by MikroTik, not a copy of Docker Engine or any of the other server-grade container engines.(^Podman, LXC/LXD, etc.) Because of the stringent resource constraints on the bulk of MikroTik's devices, it is exceptionally small, thus unavoidably very thinly featured compared to its big-boy competition. If we can use installed size as a proxy for expected feature set size, we find:
* **Docker Engine**: 422 MiB(^Version 27.1.1, according to `dnf remove docker-ce…` after installing these packages [per the instructions](https://docs.docker.com/engine/install/rhel/#install-docker-engine).)
* **Podman**: 107 MiB(^Version 4.9.4 on EL9, according to `sudo dnf remove podman conmon crun`.)
* **systemd-nspawn**: 1.3 MiB(^This is the bare-bones OCI image runner built into systemd, with a feature set fairly close to that of `container.npk`. The size above is for version 252 of this program's parent `systemd-container` package as shipped on EL9.)
* **`container.npk`**: _0.0626 MiB_(^Version 7.15.2, according to `/system/package/print`.)
And this is fine! RouterOS serves a particular market, and its developers are working within those constraints. The intent here is to provide a mapping between what people expect of a fully-featured container engine and what you actually get in RouterOS. Where it makes sense, I try to provide workarounds for missing features and guidance to alternative methods where RouterOS's way merely *works* differently.
<font color=red>This document is a **Work in Progress**.</font>
# General Observations
Allow me to present a distilled version of the details below, both to satisfy the **tl;dr** crowd and to set broad expectations for the rest of my readers.
RouterOS's `container.npk` lacks:
* a local image cache
* image building
* orchestration
* JSON and REST APIs
A good many of the `container.npk` limitations stem from those of RouterOS itself. For instance, although RouterOS proper is built atop Linux and provides a feature-rich CLI, that CLI is nothing like a Linux command shell. Equivalents to commands like "`docker run --attach std…`" therefore would not make much sense on RouterOS, there being nothing like the termios/pty subsystem visible at the RouterOS CLI level.
While I could also point out the lack of a background management daemon,(^`containerd` in modern setups, `dockerd` in old ones) a good bit of Docker's competition also lacks one, on purpose, so I cannot ding RouterOS for the same.
With this grounding, let us get to the per-command details…
# Top-Level Commands
For lack of any better organization principle, I've chosen to structure this document along the lines of the `docker` CLI, duplicating their command hierarchy, sorted alphabetically at each level. I skip over short aliases like `docker rmi` for `docker image rm` in order to cover things only once. Because Podman cloned the Docker CLI, this matches fairly well with it, except that I do not currently go into any of its pure extensions, like its eponymous `pod` command.
## <a id="attach"></a>`attach`
There is no interactive terminal (stdin/stdout/stderr) in RouterOS to speak of. Containers normally run in the background, with logging suppressed by default. If you say `/container/set logging=yes`, the standard output streams go to the configured logging destination, but there is no way to interactively type commands at the container short of `/container/shell`, which requires that `/bin/sh` exist inside the container. Even then, you're typing commands at the shell, not at the container's `ENTRYPOINT` process.
In short, there is no equivalent in RouterOS to the common `docker run -it` invocation option.
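If all you need is to watch the container's output and poke around inside it, the closest approximation combines logging with the shell. A sketch, assuming the container is number 0 and ships a `/bin/sh`:

    /container/set 0 logging=yes
    /container/shell 0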
## <a id="build"></a>`build`/`buildx`
RouterOS provides a bare-bones container runtime only, not any of the image build tooling. It is closer in nature to the `runc` command underlying `containerd` than to Docker Engine proper. An even closer match is the lightweight `crun` command at the heart of Podman, and even more so the elementary runner that ships with systemd, variously called either [`systemd-nspawn`][sdnsp] or [`systemd-container`][sdcnt], depending on the tastes of whoever is packaging it.
[sdcnt]: https://packages.fedoraproject.org/pkgs/systemd/systemd-container/
[sdnsp]: https://wiki.archlinux.org/title/Systemd-nspawn
## <a id="commit"></a>`commit`
RouterOS doesn't maintain an image cache, thus has no way to commit changes made to the current image layer to a new layer.
It is for this same reason that removing and reinstalling a container re-downloads its image, even when done back-to-back and even if the container was never started in between, so that nothing in the downloaded image could have changed.
## <a id="compose"></a>`compose`
RouterOS completely lacks multi-container orchestration features, including lightweight single-box ones like [Compose](https://docs.docker.com/compose/) or [Kind](https://kind.sigs.k8s.io) virtual clusters.
## <a id="cp"></a>`cp`
There is no direct equivalent of this command. The closest RouterOS comes is when you mount a volume, then use the regular `/file` facility to copy files in under that volume's mount point. There is no direct way to copy a file into the container proper, as you might when overwriting a stock config file.
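For example, a sketch using hypothetical mount and path names: bind a host directory into the container, then drop files into it from the RouterOS side.

    /container/mounts/add name=app-config src=/usb1/app-config dst=/etc/app
    /container/add remote-image=… mounts=app-config …

Anything you upload under `/usb1/app-config` via `/file`, FTP, or SFTP then appears inside the container at `/etc/app`.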
## <a id="create"></a>`create`/`load`/`run`
The RouterOS command `/container/add` provides a basic version of this, though with many limitations relative to a fully-featured container engine:
<font color=red>**TODO**</font>
RouterOS doesn't have separate top-level commands for creating a container from an OCI image registry versus loading it from a tarball. They're both `/container/add`, differing in whether you give the `remote-image` or `file` options, respectively.
RouterOS has no shorthand command like `docker run` for creating and starting a container in a single step. You must `add` it, then `start` it.
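A sketch of both forms, using hypothetical interface, image, and storage names:

    # assumes a veth (veth1) and suitable storage already exist
    /container/add remote-image=library/alpine:latest interface=veth1 root-dir=usb1/alpine-pull
    # or load a tarball previously uploaded to the device
    /container/add file=alpine.tar interface=veth1 root-dir=usb1/alpine-file
    # then start it yourself; there is no combined "run" step
    /container/start 0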
## <a id="diff"></a>`diff`
With no local image cache, there can be no equivalent command.
## <a id="events"></a>`events`
RouterOS doesn't support container events.
## <a id="exec"></a>`exec`
There is no way in RouterOS to execute a command inside a running container short of `/container/shell`, which of course only works if there is a `/bin/sh` inside the container.
## <a id="export"></a>`export`/`save`
There is no way to produce a tarball of a running container's filesystem or to save its state back to an OCI image tarball.
The [documented advice][imgtb] for getting such a tarball is to do this on the PC side via `docker` commands, then upload the tarball from the PC to the RouterOS device.
[imgtb]: https://help.mikrotik.com/docs/display/ROS/Container#Container-c)buildanimageonPC
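That PC-side workflow, sketched here with Docker's own tooling and a hypothetical image name, looks something like this:

    # on the PC: pull an image matching the router's CPU, then save it as a tarball
    docker pull --platform linux/arm64 alpine:latest
    docker save alpine:latest -o alpine.tar
    # upload alpine.tar to the router, then: /container/add file=alpine.tar …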
## <a id="history"></a>`history`
RouterOS doesn't keep this information.
## <a id="image"></a>`image`/`images`
RouterOS does not maintain a local image cache, thus has no need for any of the subcommands:
* `docker image ls` (a.k.a. `docker images`)
* `docker image prune`
* `docker image rm`
* `docker image tree`
## <a id="import"></a>`import`
This is `/container/add file=oci-image.tar` in RouterOS.
## <a id="info"></a>`info`
With the understanding that RouterOS has far fewer configurables than a big-boy container engine, the closest command to this in RouterOS is `/container/config/print`. The output is in typical RouterOS "print" format, not JSON.
## <a id="inspect"></a>`inspect`
The closest approximation to this in RouterOS is
    /container/print detail where …
You get only a few lines of information back from this, mainly what you gave it to create the container from the image. You will not get the pages of JSON data the Docker CLI gives.
…much less per-container settings such as you get in Docker, Podman, LXC, etc.
[caps]: https://www.man7.org/linux/man-pages/man7/capabilities.7.html
[rlimit]: https://www.man7.org/linux/man-pages/man2/getrlimit.2.html
## <a id="kill" name="stop"></a>`kill`/`stop`
RouterOS doesn't make a distinction between "kill" and "stop". The `/container/stop` command behaves more like `docker kill` or `docker stop -t0` in that it doesn't try to bring the container down gracefully before giving up and killing it.
## <a id="login"></a>`login`/`logout`
RouterOS only allows you to configure a single image registry, including the login parameters:
    /container/config/set registry-url=… username=… password=…
The only way to "log out" is to overwrite the username and password via:
    /container/config/set username="" password=""
## <a id="logs"></a>`logs`
By default, RouterOS drops all logging output from a container. To see it, you must enable it on a per-container basis with the `/container/add logging=yes` option, then tell RouterOS where to send those logs via a `/system/logging add topics=container …` command.
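A minimal working setup, assuming container number 0 and the in-memory log buffer, looks like this:

    /container/set 0 logging=yes
    /system/logging/add topics=container action=memory
    /log/print where topics~"container"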
Each message is handled in real time, not buffered as with Docker or Podman. Furthermore, RouterOS mixes logs from all sources for a given "topic" set, which in this context means that if you have multiple running containers on the device, their logs all go to the same place. Thus, if you were expecting to be able to set up memory logging for a container, log out of the router, then sometime later come back in and get a dump of everything that one particular container has logged since the last time you asked — as you can with the big-boy container engines — then you will be disappointed.
## <a id="pause"></a>`pause`/`unpause`
No such feature in RouterOS; a container is running or not.
If the container has a shell, you could try a command sequence like this to get the same effect:
    > /container/shell 0
    $ pkill -STOP 'name of process'
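To resume it later, with the same caveat about knowing the process name, send the continue signal the same way:

    $ pkill -CONT 'name of process'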
## <a id="port"></a>`port`
RouterOS exposes all ports defined for a container in the `EXPOSE` directive in the `Dockerfile`. The only way to instantiate a container with fewer exposed ports is to rebuild it or override it with a different `EXPOSE` value.
## <a id="ps"></a>`ps`/`stats`/`top`
The closest thing in RouterOS is the `/container/print follow*` commands.
A more direct alternative would be to shell into the container and run whatever it has for a `top` command, but of course that is contingent on what is available, if indeed there is a shell at all.
## <a id="push"></a>`push`/`pull`
RouterOS maintains no local image cache, thus cannot push or pull images.
While it _can_ pull from an OCI image repo, it does so as part of `/container/add`, which is closer to a `docker create` command than to `docker pull`.
There is no equivalent at all to `docker push`.
## <a id="rename"></a>`rename`
RouterOS doesn't let you set the name on creation, much less rename it later. The closest you can come to this is to add a custom `comment`, which you can both set at "`add`" time and after creation.
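For example, assuming container number 0 and a label of your choosing:

    /container/set 0 comment="dns-forwarder"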
## <a id="restart"></a>`restart`
RouterOS doesn't provide this shortcut. You must stop it and then start it again manually.
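A rough scripted stand-in, sketched here with a fixed delay rather than a proper poll for the stopped state, assuming container number 0:

    /container/stop 0
    :delay 5s
    /container/start 0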
## <a id="rm"></a>`rm`
RouterOS spells this `/container/remove`, but do be aware, there is no equivalent for `docker rm -f` to force the removal of a running container. RouterOS makes you stop it first.
## <a id="search"></a>`search`
There is no equivalent to this in RouterOS. You will need to connect to your image registry of choice and use its search engine.
## <a id="start"></a>`start`
RouterOS has `/container/start`, but with many limitations relative to `docker start`:
<font color=red>**TODO**</font>
## <a id="swarm"></a>`swarm`
Extending from the lack of single-box container orchestration features, RouterOS also completely lacks _cluster_ orchestration. It doesn't even have a lightweight one like [Docker Swarm](https://docs.docker.com/engine/swarm/) or [k3s](https://k3s.io), and it certainly doesn't support the behemoth that is Kubernetes.
## <a id="tag"></a>`tag`
RouterOS does nothing more with tags than to select which image to download from a registry. Without a local image cache, you cannot re-tag an image.
## <a id="update"></a>`update`
No equivalent short of this:
    /container/stop 0
    …wait for it to stop…
    /container/remove 0
    /container/add …
The last step is the tricky one since `/container/print` shows most but not all of the options you gave to create it. If you didn't write down how you did that, you're going to have to work that out to complete the command sequence.
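Before removing it, it is worth capturing what RouterOS does retain. A sketch, filtering on a hypothetical comment:

    /container/print detail where comment="myapp"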
## <a id="version"></a>`version`
The `container.npk` version always matches that of the `routeros.npk` package you have installed, and RouterOS upgrades both in lock-step, making this the closest equivalent command:
    /system/package/print
## <a id="wait"></a>`wait`
The closest equivalent to this would be to call `/container/stop` in a RouterOS script and then poll on `/container/print where …` until it stopped.
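A sketch of that pattern, assuming container number 0 and that its `status` property reads `running` until it has fully stopped:

    /container/stop 0
    :while ([/container/get 0 status] = "running") do={ :delay 1s }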