* CPU usage
* storage IOPS
* `/dev/shm` size limit
* terminal/logging bps
* [capability][caps] restrictions
* [seccomp profiles](https://docs.docker.com/engine/security/seccomp/)
* [rlimit]
* [limited hardware pass-thru](#hw)
Lack of a management daemon(^`containerd` in modern setups, `dockerd` in old ones) is not in that list because a good bit of Docker’s competition also lacks this, on purpose. Between that and the other items on the list, the fairest comparison is not to fully-featured container *engines* like Docker and Podman but to the container *runner* at their heart:
* **runc**: 14 MiB(^This is the runner underpinning `containerd`, thus also Docker, although it precedes it. Long before they created `containerd`, it underpinned `dockerd` instead. Because it is so primordial, a good many other container engines are also based on it.)
* **systemd-nspawn**: 1.3 MiB(^[This][sdnsp] is the bare-bones [OCI] image runner built into systemd, with a feature set fairly close to that of `container.npk`. The size above is for version 252 of this program’s parent [`systemd-container`][sdcnt] package as shipped on EL9.)
* **crun**: 0.5 MiB(^This is Podman’s alternative to `runc`, written in C to make it smaller. Early versions of Podman once relied on `runc`, and it can still be configured to use it, but the new default is to use the slimmer but feature-equivalent `crun`.)
The single biggest area of difference between the likes of Docker and the RouterOS `container.npk` feature is how you create containers from [OCI] images. RouterOS combines Docker’s `create` and `load` commands under `/container/add`, expressing the distinction through whether you give it the `remote-image` or `file` option, respectively.
Given the size of the output from `docker create --help`, it should not be surprising that the bulk of that is either not available in RouterOS or exists in a very different form. Most of these limitations stem from [the list above](#global). For instance, the lack of any CPU usage limit features means there is no equivalent under `/container` for the several `docker create --cpu*` options. Rather than go into these options one by one, I’ll cover the ones where the answers cannot be gleaned through a careful reading of the rest of this article:
* **`--env`**: The equivalent is this RouterOS command pair:
/container/envs/add name=NAME…
/container/add envlist=NAME…
This is in fact closer to the way the **`--env-file`** option works, except that under RouterOS, this particular “file” isn’t stored under `/file`!
* **`--expose`/`--publish`**: <a id="publish"></a>The VETH you attach the container to makes every listening socket visible by default; the `EXPOSE` directive given in your `Dockerfile` is completely ignored. Everything the big-boy container engines do related to this is left up to you, the RouterOS administrator, to do manually:
* block **unwanted** services exposed within the container with `/ip/firewall/filter` rules
* port-forward **wanted** services in via `dstnat` rules
* **`--health-cmd`**: Because health-checks are often implemented as periodic API calls verifying that the container continues to run properly, the logical equivalent under RouterOS is a [script] that calls [`/fetch`](https://help.mikrotik.com/docs/display/ROS/Fetch), then issues `/container/{stop,start}` commands to remediate any problems it finds.
* **`--init`**: Although there is no direct equivalent to this in RouterOS, nothing stops you from doing it the old-school way, creating a container that calls “`ENTRYPOINT /sbin/init`” or similar, which then starts the subordinate services inside that container. It would be somewhat silly to use systemd for this in a container meant to run on RouterOS in particular; a more suitable alternative would be [Alpine’s OpenRC](https://wiki.alpinelinux.org/wiki/OpenRC) init system, a popular option for managing in-container services.
* **`--label`**: The closest equivalent is RouterOS’s `comment` facility, which you can apply to a running container with “`/container/set 0 comment=MYLABEL`”.
* **`--mac-address`**: If RouterOS had this, I would expect it to be offered as “`/interface/veth/set mac-address=…`”, but that does not currently exist. As it stands, a VETH interface’s MAC address is random, same as the default behavior of Docker.
* **`--mount`**: The closest equivalent to this in RouterOS is quite different, being the `/container/mounts/add` mechanism. The fact that you create this ahead of instantiating the container might make you guess this to be a nearer match to a “`docker volume create…`” command, but alas, there is no container volume storage manager. In Docker-speak, RouterOS offers bind-mounts only, not separately-managed named volumes that only containers can see.
Atop this, `container.npk` can bind-mount whole directories only, not single files as Docker and Podman allow. This can be a particular problem when trying to inject a single file under `/etc` since it tends to require that you copy in all of the “peer” files in that same subdirectory hierarchy merely to override one of them.
* **`--network`**: This one is tricky. While there is certainly nothing like “`/container/add network=…`”, it’s fair to say the equivalent is, “RouterOS.” You are, after all, running this container atop a highly featureful network operating system. Bare-bones the `container.npk` runtime may be, but any limitations you run into with the network it attaches to are more a reflection of your imagination and skill than of any lack of command options under `/container`.
* **`--pid/uts`**: The RouterOS container runner must use Linux namespaces under the hood, but it does not offer you control over which PID, file, network, user, etc. namespaces each container uses. See also [this](#root).
* **`--read-only`**: RouterOS offers precious little in terms of file system permission adjustment. As a rule, it is best to either shell into the container and adjust permissions there or rebuild the container with the permissions you want from go. Any expectations based on being able to adjust any of this between image download time and container creation time are likely to founder.
* **`--restart`**: <a id="restart"></a>The closest RouterOS gets to this is its `start-on-boot` setting, meaning you’d have to reboot the router to get the container to restart. If you want automatic restarts, you will have to [script] it.
* **`--rm`**: No direct equivalent, and until we get a `run` command and an image cache, it’s difficult to justify adding it.(^There is a manual `/container/remove` command, but it does something rather different.)
* **`--volume`**: This is largely covered under `--mount` above, but it’s worth repeating that `container.npk` has no concept of what Docker calls “volumes;” it _only_ has bind-mounts. In that sense, RouterOS does not blur lines as Docker and Podman attempt to do in their handling of the `--volume` option.
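To make the `--expose`/`--publish` point above concrete, here is a hedged sketch of the manual equivalent. Assume, purely for illustration, that the container sits at 172.17.0.2 behind `veth1` and serves HTTP on port 8080; none of these addresses or ports come from this article:

```
# Port-forward the *wanted* service in from the WAN side…
/ip/firewall/nat/add chain=dstnat action=dst-nat protocol=tcp \
    in-interface=ether1 dst-port=8080 \
    to-addresses=172.17.0.2 to-ports=8080

# …and drop anything aimed at the container that did not
# arrive through that dstnat rule.
/ip/firewall/filter/add chain=forward action=drop \
    dst-address=172.17.0.2 connection-nat-state=!dstnat
```

Adjust or omit the filter rule to taste; the point is that all of this policy lives in `/ip/firewall`, not under `/container`.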
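Likewise, the `--health-cmd` and `--restart` workarounds above can be combined into a single [script] hung off `/system/scheduler`. This is only a sketch under assumptions: the container number `0`, the URL, and the delay are placeholder values, not anything from this article:

```
# Body of a script to run from /system/scheduler every minute or so.
# If the hypothetical health endpoint stops answering, bounce the container.
:do {
    /tool/fetch url="http://172.17.0.2:8080/health" keep-result=no
} on-error={
    /container/stop 0
    :delay 10s
    /container/start 0
}
```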
That brings us to the related matter of…
[script]: https://help.mikrotik.com/docs/display/ROS/Scripting
sh-5.1# exit
…may end up expressed under RouterOS as…
> /container
> add remote-image=alpine:latest veth=veth1 entrypoint=sleep cmd=3600
> print
… nope, still downloading, wait…
> print
… nope, still extracting, wait longer…
> print
… oh, good, got the container ID…
> start 0
… wait for it to launch…
> shell 0
sh-5.1# <do something inside the container>
sh-5.1# exit
> stop 0
> remove 0
Whew! 😅
But how then do you say something akin to the following under RouterOS?
docker run -it alpine:latest ls -lR /etc
You might want to do that in debugging to find out what a given config file is called and exactly where it is in the hierarchy so that you can target it with a `mount=…` override. If you try to pass it all as…
/container/add… entrypoint="ls -lR /etc"
…the kernel will complain that there is no command in the container’s `PATH` called “`ls -lR /etc`”.
You may then try to split it as…
/container/add… entrypoint="ls" cmd="-lR /etc"
…but that will earn you a refusal by `/bin/ls` to accept “ ” (space) as an option following the `R`!
If you get cute and try to “cuddle” the options with the arguments as…
/container/add… entrypoint="ls" cmd="-lR/etc"
…the `/bin/ls` implementation will certainly attempt to treat `/` as an option and die with an error message.(^Yes, for certain. I tested the GNU, BSD, _and_ BusyBox implementations of `ls`, and they all do this.)
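You can reproduce this argv-splitting trap on any Linux box, no router required. The first command below passes the whole string as a single argument, which is effectively what `container.npk` does with `cmd="-lR /etc"`; the second lets a shell split the words before `exec`, which is the step `container.npk` lacks (the `/tmp/lsdemo` directory is just a stand-in so the listing stays small):

```shell
# Stage a small directory so the listing is predictable.
mkdir -p /tmp/lsdemo && touch /tmp/lsdemo/file

# One argument, "-lR /tmp/lsdemo": ls sees the space as a bogus option letter.
ls "-lR /tmp/lsdemo" 2>/dev/null || echo "refused"

# Two arguments, "-lR" and "/tmp/lsdemo": the shell split them, so ls is happy.
ls -lR /tmp/lsdemo >/dev/null && echo "accepted"
```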
Things aren’t always this grim. For instance, you can run [my `iperf3` container](/dir/iperf3) as a client instead of its default server mode by saying something like:
/container/add… cmd="-c192.168.88.99"
This relies on the fact that the `iperf3` command parser knows how to break the host name part out from the `-c` option itself, something not all command parsers are smart enough to do. There’s 50 years of Unix and Linux history encouraging programs to rely on the shell to do a lot of work before the program’s `main()` function is even called. The command line processing that `container.npk` applies to its `cmd` argument lacks all that power. If you want Bourne shell parsing of your command line, you have to set it via `ENTRYPOINT` or `CMD` in the `Dockerfile`, then rebuild the image.
There is one big exception to all this: a common pattern is to have the `ENTRYPOINT` to a container be a shell script and for that to do something like this at the end:
/path/to/actual/app "$@"
# <a id="logs"></a>Log Handling
Although Docker logging is tied into this same Linux terminal I/O design, we cannot blame the lack of an equivalent to “`docker logs`” on the RouterOS design principles in the same manner as [above](#terminal). The cause here is different, stemming first from the fact that RouterOS boxes try to keep logging to a minimum by default, whereas Docker logs everything the container says, without restriction. RouterOS takes the surprising default of logging to volatile RAM in order to avoid burning out the flash. Additionally, it ignores all messages issued under “topics” other than the four preconfigured by default, which does not include the “container” topic you get access to by installing `container.npk`.
To prevent your containers’ log messages from being sent straight to the bit bucket, you must say:
/container/{add,set}… logging=yes
/system/logging add topics=container action=…
Having done so, we have a new limitation to contend with: RouterOS logging isn’t as powerful as the Docker “`logs`” command, which by default works as if you asked it, “Tell me what this particular container logged since the last time I asked.” RouterOS logging, on the other hand, mixes everything together in real time, requiring you to dig through the history manually.
(The same is true of `podman logs`, except that it ties into systemd’s unified “journal” subsystem, a controversial design choice that ended up paying off handsomely when Podman came along and wanted to pull up per-container logs to match the way Docker behaved.)
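If you would rather have the history survive a reboot, one possibility is to point the `container` topic at a disk-backed logging action instead. The action and file names below are made-up examples, and the flash-wear caveat above applies in full:

```
# Send container topic messages to a file on disk instead of volatile RAM.
/system/logging/action/add name=containers target=disk disk-file-name=containers
/system/logging/add topics=container action=containers
```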
# <a id="cache"></a>There Is No Local Image Cache
I stated this [in the list above](#global), but what does that mean in practice? What do we lose as a result?
A surprising number of knock-on effects result from this lack:
1. Registries with pull-rate limiting are more likely to refuse you during experimentation as you repeatedly reinstantiate a container trying to get it to work. This can be infuriating when it happens in the middle of a hot-and-heavy debugging session.
The pricing changes made to Docker Hub in late 2024 play into this. They’re now imposing a limit of 200 pulls per user per 6 hours for users on the free tier, where before they had an unlimited-within-reason policy for public repos. You can give RouterOS a Docker Hub user login name and a CLI token (“`password`”) to work around that, saving you from the need to compete with all the other anonymous users pulling that image, including random bots on the Internet.
The thing is, if RouterOS had an image cache, you would only have to pull the image once as long as you keep using the same remote image URL, as when trying out different settings. That would let you side-step the whole mess.
2. If the container provides DNS, you may end up in a chicken-and-egg situation where the old container is down but now the router can’t pull from the remote registry (e.g. Docker Hub) because it can no longer resolve `registry-1.docker.io`. An image cache solves this problem by allowing the runtime to pull the new image while the prior one still runs, then do the swap with both versions of the image in the cache. It even allows clever behavior like health checks to gate whether to continue with the swap or trigger a rollback.
3. Equivalents for several of the “missing” commands [listed below](#tlc) cannot be added to `container.npk` without adding an image cache first: `commit`, `diff`, `pull`, etc.(^To be fair, a number of these commands only need to exist in the big-boy engines _because of_ the image cache: `rmi`, `prune`, etc.)
A broad workaround for _some_ of the above is having the foresight to pull the image using Docker or Podman, save it out as a tarball, and use `/container/add file=` instead of `remote-image`. There are landmines along this path owing to the [OCI compatibility issue](#compliance) covered separately below.
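On a workstation with Docker installed, that workaround might run along these lines; the image name, tarball name, and router address are placeholders:

```
# Pull on the workstation, where a proper image cache exists…
docker pull alpine:latest

# …flatten the cached image to a tarball…
docker save -o alpine.tar alpine:latest

# …ship it to the router, then create the container from the file:
scp alpine.tar admin@192.168.88.1:/
#   /container/add file=alpine.tar veth=veth1 …
```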
# <a id="root"></a>Everything Is Rootful
This shows up in a number of guises, but the overall effect is that all containers run as a nerfed `root` user under `container.npk`, same as Docker did from the start. This remains the Docker default, but starting with the 20.10 release, it finally got a [rootless mode][drl] to compete with [Podman’s rootless-by-default][prl] nature. I bring up this history to show that RouterOS is not unconditionally “wrong” to operate as it does, merely limited.
This design choice may be made reasonably safe through the grace of [user namespaces](https://www.man7.org/linux/man-pages/man7/user_namespaces.7.html), which cause the in-container `root` user to be meaningfully different from the Linux `root` user that RouterOS itself runs as. RouterOS does have a `/user` model, but they are not proper Linux users as understood by the kernel, with permissions enforced by Linux user IDs; RouterOS users have _no meaningful existence at all_ inside the container. One practical effect of this is that when you start a container as RouterOS user `fred`, you will not find a `fred` entry in its `/etc/passwd` file, and if you create one at container build time (e.g. with a `RUN useradd` command) it will not be the same `fred` as the RouterOS user on the outside.
Files created by that nerfed `root` user will show up as owned by `root` when using bind-mounted directories on file systems like `ext4` which preserve file ownership. One possible solution for this is:
/disk/format-drive file-system=exfat…
It is because of this same limitation that there is no RouterOS equivalent to the `create --user*` or `--group-add` flags.
If your container was designed to have non-root users inside with meaningful distinctions from root, it may require massaging to work on RouterOS. There are no UID maps to convert in-container user IDs to RouterOS user IDs, etc. This is one of the key reasons why it matters that [containers are not VMs][cvm]; persisting in this misunderstanding is liable to lead you to grief under `container.npk`. Let go of your preconceptions and use the RouterOS container runner the way it was meant to be applied: running well-focused single services.(^This philosophy is not specific to RouterOS, nor is it special pleading on its behalf, meant to justify its limitations. [Microservices][msc] are a good idea atop _all_ container runtimes.)
[cvm]: /wiki?name=Containers%20Are%20Not%20VMs
[drl]: https://docs.docker.com/engine/security/rootless/
Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.
Incidentally, exploration of the binfmts available to you on your container build host of choice might result in output like `linux/mips64le`, leaving you exulting, “See, there _is_ MIPS support!” But no. First off, this is 64-bit MIPS, while all MIPS CPUs shipped by MikroTik to date have been 32-bit. Second, it’s [little-endian](https://en.wikipedia.org/wiki/Endianness) (LE), which means it wouldn’t work with the big-endian MIPS CPUs that were more popular historically. Third, even if you find/build a platform that includes support for the MIPSBE, MMIPS, and SMIPS CPU types MikroTik shipped, you’re likely back to lack of a base OS to build from.
## <a id="armv5"></a>…And Only _Most_ ARM at That
There’s a special case to be aware of here: the [2024 hEX Refresh](https://mikrotik.com/product/hex_2024) will run `container.npk`, but because the EN7562 SoC it is based on is limited to the ARMv5 instruction set, there are [nearly zero][^v5img] container images available. For perspective, the Raspberry Pi foundation selected a chip using the ARMv6 architecture for their initial product offering in 2012, then switched to the ARMv7 architecture for the Pi 2 and newer.
This is _seriously old tech!_
That is not to say it is impossible to build container images for this device, but that you’re in much the same situation as for the non-ARM CPU types above.
[^v5img]: As of this writing, an [architecture search for “armv5” on Docker Hub](https://hub.docker.com/search?architecture=armv5) returns zero results, but in the past, it returned a small number, approximately five, as I recall. None were for mainstream Linux distros, the type most useful as an image base.
# <a id="auto"></a>Automation
Included in the list of lacks [above](#global) is the [Docker Engine API][DEAPI]. The closest extant feature is the [RouterOS REST API][ROAPI], which can issue commands equivalent to those available at the CLI via `/container`. With this, you can programmatically add, remove, start, and stop containers, plus more.
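As a sketch of what that allows, assuming the REST service is enabled on your router and that `*0` is a valid container ID there — the credentials, address, and the `.id` parameter name are my assumptions, not documented specifics from this article:

```
# List all containers as JSON…
curl -k -u admin:PASSWORD https://192.168.88.1/rest/container

# …then start one by the internal ID learned from that listing:
curl -k -u admin:PASSWORD \
    -H "Content-Type: application/json" \
    -d '{".id":"*0"}' \
    -X POST https://192.168.88.1/rest/container/start
```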
In theory, this should result in zero change since it’s converting to the same output format as the input, but more than once I’ve seen it fix up some detail that RouterOS’s container image loader can’t cope with on its own.
Note, incidentally, that we don’t use Skopeo’s `oci-archive` format specifier. I don’t know why, but I’ve had less success with that.
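For concreteness, the same-format rewrite being described presumably reduces to something like this, with placeholder file names:

```
# Round-trip a Docker-format tarball through Skopeo: same format out as in.
skopeo copy docker-archive:original.tar docker-archive:rewritten.tar
```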
[Skopeo]: https://github.com/containers/skopeo
# <a id="hw"></a>Hardware Pass-Thru
Prior to RouterOS 7.20, the only support for hardware in containers was indirect. Examples:
* Mounts allow access to USB storage, provided it’s formatted in a way that `/file` can see.
* A third-party NIC might have a driver in the shipping RouterOS kernel, which would let you route traffic to the container via that NIC.
RouterOS 7.20 added the `/system/hardware` menu. If present, it lists devices you can map in via the `device` option when creating a container, typically generic USB peripherals, and then only when the shipped kernel includes a driver for it. If your router does not have this menu but it’s running RouterOS 7.20beta2 or higher, it means there are no supported devices available for use by this feature.
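A hedged sketch of the flow, with a wholly hypothetical device name and image, since the set of mappable devices varies per router model and kernel:

```
# See whether the kernel recognized anything mappable…
/system/hardware/print

# …then hand one such device to a new container. The device name and
# image below are hypothetical placeholders.
/container/add remote-image=myimage:latest veth=veth1 device=usb1
```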
Notably, this feature does not allow generic PCIe pass-thru of things like GPUs, not even on bare-metal x86_64 installs, since RouterOS does not include GPU drivers.
# <a id="tlc"></a>Top-Level Commands
So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.
For lack of any better organization principle, I’ve chosen to cover the `docker` CLI commands in alphabetical order. Because Podman cloned the Docker CLI, this ordering matches up fairly well with its top-level command structure as well, the primary exception being that I do not currently go into any of Podman’s pure extensions, ones such as its eponymous `pod` command.
## <a id="info" name="inspect"></a>`info`/`inspect`
With the understanding that RouterOS has far fewer configurables than a big-boy container engine, the closest commands in RouterOS are:
* `/container/config/print`
* `/container/print detail where…`
* `:put [:serialize value=[/container/get 0] to=json options=json.pretty]`
That last one was crafted by @Nick on the [MikroTik Discord][MTDisc]. It gives a pretty-printed JSON version of what you get from the second command, which is useful when automating `/container` commands via SSH, as with Ansible. Even so, it’s far short of the pages and pages of detail you get from the Docker and Podman CLI equivalents.
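For instance, the SSH-based automation mentioned above could drive that last command from the far end like so; the router address is a placeholder:

```
# Fetch one container's config as pretty-printed JSON, ready for
# parsing by an Ansible task or a jq pipeline.
ssh admin@192.168.88.1 \
    ':put [:serialize value=[/container/get 0] to=json options=json.pretty]'
```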
A related limitation is that configurable parameters are often global in RouterOS, set for all containers running on the box, not available to be set on a per-container basis. A good example of this is the memory limit, set via `/container/config/set ram-high=…`.
[MTDisc]: https://discord.gg/exGj6whYw7
## <a id="kill" name="stop"></a>`kill`/`stop`
## <a id="update"></a>`update`
There is no equivalent short of this:
/container/stop 0
…wait for it to stop…
/container/remove 0
/container/add…
The last step is the tricky one since `/container/print` shows most but not all of the options you gave to create it. If you didn’t write down how you did that, you’re going to have to work that out to complete the command sequence.
## <a id="version"></a>`version`
While RouterOS’s `container.npk` technically does have an independent version number of its own, it is meant to always match that of the `routeros.npk` package you have installed. RouterOS automatically upgrades both in lock-step, making this the closest equivalent command:
/system/package/print
## <a id="wait"></a>`wait`
The closest equivalent to this would be to call `/container/stop` in a RouterOS script and then poll on `/container/print where…` until it stopped.
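As a RouterOS script fragment, that polling might be sketched like this; treating `status` as the property name is my assumption about what `/container/print` exposes, and container `0` is a placeholder:

```
# Ask the container to stop, then block until it reports that it has.
/container/stop 0
:while ([/container/get 0 status] != "stopped") do={ :delay 1s }
```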
# <a id="license"></a>License
This work is © 2024-2025 by Warren Young and is licensed under <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" target="_blank" rel="license noopener noreferrer">CC BY-NC-SA 4.0<img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/cc.svg?ref=chooser-v1"><img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/by.svg?ref=chooser-v1"><img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/nc.svg?ref=chooser-v1"><img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/sa.svg?ref=chooser-v1"></a>
<div style="height: 50em" id="this-space-intentionally-left-blank"></div>