Motivation
The RouterOS container.npk feature is highly useful, but it is a custom development written in-house by MikroTik, not a copy of Docker Engine or any of the other server-grade container engines.1 Because of the stringent resource constraints on the bulk of MikroTik’s devices, it is exceptionally small, thus unavoidably very thinly featured compared to its big-boy competition. If we can use installed size as a proxy for expected feature set size, we find that Docker Engine,2 nerdctl plus containerd,3 and Podman4 are all many times the size of container.npk.5
And this is fine! RouterOS serves a particular market, and its developers are working within those constraints. The intent here is to provide a mapping between what people expect of a fully-featured container engine and what you actually get in RouterOS. Where it makes sense, I try to provide workarounds for missing features and guidance to alternative methods where RouterOS’s way merely works differently.
Global Limitations
Allow me to begin with the major limitations visible at a global level in the RouterOS container.npk
feature, both to satisfy the tl;dr crowd and to set broad expectations for the rest of my readers. This super-minimal container implementation lacks:
- orchestration
- rootless mode
- image building
- local image cache
- Docker Engine API
- volume storage manager
- CoW/overlay file system6
- per-container limit controls:7
  - FD count
  - PID limit
  - CPU usage
  - storage IOPS
  - /dev/shm size limit
  - terminal/logging bps
  - capability restrictions
  - seccomp profiles
  - rlimit
- hardware pass-thru:
  - USB and serial /dev node pass-thru is on the wish list, but is not yet implemented.8
  - There is no GPU support, not even for bare-metal x86 installs.
Lack of a management daemon9 is not in that list because a good bit of Docker’s competition also lacks this, on purpose. Between that and the other items on the list, the fairest comparison is not to fully-featured container engines like Docker and Podman but to the container runners at their heart: runc,10 systemd-nspawn,11 and crun.12
One reason container.npk is far smaller than even the smallest of these runners is that the engines delegate much of what RouterOS lacks to the runner, so that even then it’s an unbalanced comparison. The kill, ps, and pause commands missing from container.npk are provided in Docker Engine way down at the runc level, not up at the top-level CLI.
With this grounding, let us dive into the details.
Container Creation
The single biggest area of difference between the likes of Docker and the RouterOS container.npk feature is how you create containers from OCI images. It combines Docker’s create and load commands under /container/add, the distinction expressed by whether you give it the remote-image or file option, respectively.
Given the size of the output from docker create --help, it should not be surprising that the bulk of that is either not available in RouterOS or exists in a very different form. Most of these limitations stem from the list above. For instance, the lack of any CPU usage limit features means there is no equivalent under /container for the several docker create --cpu* options. Rather than go into these options one by one, I’ll cover the ones where the answers cannot be gleaned through a careful reading of the rest of this article:
- --env: The equivalent is this RouterOS command pair:

  /container/envs/add name=NAME …
  /container/add envlist=NAME …

  This is in fact closer to the way the --env-file option works, except that under RouterOS, this particular “file” isn’t stored under /file! (A concrete example appears after this list.)

- --expose/--publish: The VETH you attach the container to makes every listening socket visible by default; the EXPOSE directive given in your Dockerfile is completely ignored. Everything the big-boy container engines do related to this is left up to you, the RouterOS administrator, to do manually:
  - block unwanted services exposed within the container with /ip/firewall/filter rules
  - port-forward wanted services in via dstnat rules

- --health-cmd: Because health-checks are often implemented by periodic API calls to verify that the container continues to run properly, the logical equivalent under RouterOS is to script calls to /fetch, which then issues /container/{stop,start} calls to remediate any problems it finds. (A sketch of this appears after this list.)

- --init: Although there is no direct equivalent to this in RouterOS, nothing stops you from doing it the old-school way, creating a container that calls “ENTRYPOINT /sbin/init” or similar, which then starts the subordinate services inside that container. It would be somewhat silly to use systemd for this in a container meant to run on RouterOS in particular; a more suitable alternative would be Alpine’s OpenRC init system, a popular option for managing in-container services.

- --label: The closest equivalent is RouterOS’s comment facility, which you can apply to a running container with “/container/set 0 comment=MYLABEL”.

- --mac-address: If RouterOS had this, I would expect it to be offered as “/interface/veth/set mac-address=…”, but that does not currently exist. As it stands, a VETH interface’s MAC address is random, same as the default behavior of Docker.

- --mount: The closest equivalent to this in RouterOS is quite different, being the /container/mounts/add mechanism. The fact that you create this ahead of instantiating the container might make you guess this to be a nearer match to a “docker volume create …” command, but alas, there is no container volume storage manager. In Docker-speak, RouterOS offers bind-mounts only, not separately-managed named volumes that only containers can see.

  Atop this, container.npk can bind-mount whole directories only, not single files as Docker and Podman allow. This can be a particular problem when trying to inject a single file under /etc, since it tends to require that you copy in all of the “peer” files in that same subdirectory hierarchy merely to override one of them.

- --network: This one is tricky. While there is certainly nothing like “/container/add network=…”, it’s fair to say the equivalent is, “RouterOS.” You are, after all, running this container atop a highly featureful network operating system. Bare-bones though the container.npk runtime may be, any limitations you run into with the network it attaches to are more a reflection of your imagination and skill than of a lack of command options under /container.

- --pid/--uts: The RouterOS container runner must use Linux namespaces under the hood, but it does not offer you control over which PID, file, network, user, etc. namespaces each container uses. See also this.

- --read-only: RouterOS offers precious little in terms of file system permission adjustment. As a rule, it is best to either shell into the container and adjust permissions there or rebuild the container with the permissions you want from the get-go. Any expectations based on being able to adjust any of this between image download time and container creation time are likely to founder.

- --restart: The closest RouterOS gets to this is its start-on-boot setting, meaning you’d have to reboot the router to get the container to restart. If you want automatic restarts, you will have to script it. (A sketch of this, too, appears after this list.)

- --rm: No direct equivalent, and until we get a run command and an image cache, it’s difficult to justify adding it.13

- --volume: This is largely covered under --mount above, but it’s worth repeating that container.npk has no concept of what Docker calls “volumes;” it only has bind-mounts. In that sense, RouterOS does not blur lines as Docker and Podman attempt to do in their handling of the --volume option.
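Returning to --env: here is a concrete, hypothetical example that passes a time zone and one app-specific variable to a container. The list name “myenvs”, the image, and the values are mine, not anything RouterOS dictates:

/container/envs/add name=myenvs key=TZ value=UTC
/container/envs/add name=myenvs key=PGDATA value=/var/lib/postgresql/data
/container/add remote-image=postgres:16 veth=veth1 envlist=myenvs

Every container referencing envlist=myenvs receives all of the key/value pairs in that list, which is what makes it behave more like --env-file than a per-container --env.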
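As for --health-cmd, here is a minimal sketch of the scripted alternative, assuming a hypothetical web service in container 0 answering at 172.17.0.2:8080. Save something like it as a /system/script and call it from a /system/scheduler entry every minute or so:

# probe the service; on fetch failure, bounce container 0
:do {
    /tool/fetch url="http://172.17.0.2:8080/health" output=none
} on-error={
    /container/stop 0
    :delay 10s
    /container/start 0
}

The :delay papers over the fact that /container/stop is asynchronous, a point covered under “restart” below; a more careful script would poll the container’s status instead.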
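And for --restart, the rough stand-in for Docker’s restart=always policy is a scheduler entry that starts the container whenever it shows as stopped. A sketch, assuming the container of interest is number 0:

# every 30 seconds, start container 0 if it has stopped
/system/scheduler/add name=keepalive0 interval=30s on-event=":if ([/container/get 0 status] = \"stopped\") do={/container/start 0}"

Note that this also fights you when you stop the container on purpose; disable the scheduler entry first.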
That brings us to the related matter of…
There Is No “Run”
RouterOS offers no shorthand command akin to docker run
for creating and starting a container in a single step. Moreover, the lack of Linux-like interactive terminal handling — covered below — means a simple command like…
$ docker run --rm -it alpine:latest
…followed by…
sh-5.1# <do something inside the container>
sh-5.1# exit
…may end up expressed under RouterOS as…
> /container
> add remote-image=alpine:latest veth=veth1 entrypoint=sleep cmd=3600
> print
… nope, still downloading, wait …
> print
… nope, still extracting, wait longer …
> print
… oh, good, got the container ID …
> start 0
… wait for it to launch …
> shell 0
sh-5.1# <do something inside the container>
sh-5.1# exit
> stop 0
> remove 0
Whew! 😅
I resorted to that “sleep 3600” hack in order to work around the lack of interactive mode in container.npk, without which containers of this type will start, do a whole lot of nothing, and then stop. I had to give it some type of busy-work to keep it alive long enough to let me shell in and do my actual work. This sneaky scam is a common one for accomplishing that end, but it has the downside of requiring you to predict how long you want the container to run before stopping; this version only lasts an hour.
If you are imagining more complicated methods for keeping containers running in the background when they were designed to run interactively, you are next liable to fall into the trap that…
There Is No Host-Side Command Line Parser
The RouterOS CLI isn’t a Bourne shell, and the container feature treats the optional entrypoint and cmd values as simple strings, without any of the parsing you get for free when typing docker commands into a Linux command shell. The net effect of all this is that with many containers, you’re limited to two-word commands, one in entrypoint and the other in cmd, as in the above “sleep 3600” hack.
But how then do you say something akin to the following under RouterOS?
docker run -it alpine:latest ls -lR /etc
You might want to do that in debugging to find out what a given config file is called and exactly where it is in the hierarchy so that you can target it with a mount=…
override. If you try to pass it all as…
/container/add … entrypoint="ls -lR /etc"
…the kernel will complain that there is no command in the container’s PATH called “ls -lR /etc”.
You may then try to split it as…
/container/add … entrypoint="ls" cmd="-lR /etc"
…but that will earn you a refusal by /bin/ls to accept “ ” (space) as an option following the R!
If you get cute and try to “cuddle” the options with the arguments as…
/container/add … entrypoint="ls" cmd="-lR/etc"
…the /bin/ls
implementation will certainly attempt to treat /
as an option and die with an error message.14
Things aren’t always this grim. For instance, you can run my iperf3
container as a client instead of its default server mode by saying something like:
/container/add … cmd="-c192.168.88.99"
This relies on the fact that the iperf3
command parser knows how to break the host name part out from the -c
option itself, something not all command parsers are smart enough to do. There’s 50 years of Unix and Linux history encouraging programs to rely on the shell to do a lot of work before the program’s main()
function is even called. The command line processing that container.npk
applies to its cmd
argument lacks all that power. If you want Bourne shell parsing of your command line, you have to set it via ENTRYPOINT
or CMD
in the Dockerfile
, then rebuild the image.
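For instance, here is a minimal sketch of that rebuild approach applied to the earlier “ls” debugging case; the image base is arbitrary. The shell form of ENTRYPOINT below causes Docker to wrap the command as “/bin/sh -c "ls -lR /etc"”, restoring the Bourne shell parsing that container.npk won’t do for you:

FROM alpine:latest
# shell form, not exec form, so /bin/sh does the word-splitting
ENTRYPOINT ls -lR /etc

Build and push that image, then name it in /container/add remote-image=… with no entrypoint or cmd overrides at all.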
There is one big exception to all this: a common pattern is to have the ENTRYPOINT
to a container be a shell script and for that to do something like this at the end:
/path/to/actual/app $@
This ropes the /bin/sh
inside the container into the process, and depending on exactly how it’s done, it might be able to split a single passed command argument string into multiple arguments to the internal program. The main problem with this is that it’s entirely contingent on how the container image is set up. The only way to profit from this realization other than by happenstance is if you’re creating the image yourself and can arrange for it to run your passed argument string through an interpretation process akin to the one shown above. That amounts to a type of “command injection” vulnerability, but as long as you’re certain your commands are coming from trusted sources, it might be a risk you’re willing to accept.
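A minimal sketch of that pattern, with hypothetical paths; the $@ is left unquoted on purpose so the shell re-splits whatever single string RouterOS passed via cmd into separate words:

#!/bin/sh
# entrypoint.sh: one-time setup could go here, then hand off to the app;
# exec is a common refinement so the app replaces this shell as PID 1
exec /path/to/actual/app $@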
Interactive Terminal Handling
Although RouterOS proper is built atop Linux, and it provides a feature-rich CLI, it is nothing like a Linux command shell. I am not speaking of skin-level command syntax differences here; the differences go far deeper.
When you SSH into a RouterOS box, you’re missing out on a meaningful distinction between stdout and stderr, and the kernel’s underlying termios/pty subsystem is hidden from you. These lacks translate directly into limitations in the ability of container.npk
to mimic the experience of using Docker at the command line.
One of the core RouterOS design principles is being able to run headlessly for long periods, with the administrator connecting to their virtual terminal via WinBox, WebFig, or SSH briefly, only long enough to accomplish some network admin task before logging back out. The RouterOS CLI never was meant to provide the sort of rich terminal experience you need when you work in a Linux terminal all day, every day.
The thing is, Docker was designed around this sensibility.
It is for this inherent reason that container.npk cannot provide equivalents of Docker’s attach command, nor its “docker run --attach” flag, nor the common “docker run -it” option pair. The closest it comes to all this is its shell command implementation, which can connect your local terminal to a true remote Linux terminal subsystem. Alas, that isn’t a close “run -it” alternative because you’re left typing commands at this remote shell, not at the container’s ENTRYPOINT process. Even then, it doesn’t always work, since a good many containers lack a /bin/sh program inside the container in the first place, on purpose, typically to reduce the container’s attack surface.15
Log Handling
Although Docker logging is tied into this same Linux terminal I/O design, we cannot blame the lack of an equivalent to “docker logs” on the RouterOS design principles in the same manner as above. The cause here is different, stemming first from the fact that RouterOS boxes try to keep logging to a minimum by default, whereas Docker logs everything the container says, without restriction. RouterOS takes the surprising default of logging to volatile RAM in order to avoid burning out the flash. Additionally, it ignores all messages issued under “topics” other than the four preconfigured by default, which do not include the “container” topic you get access to by installing container.npk.
To prevent your containers’ log messages from being sent straight to the bit bucket, you must say:
/container/{add,set} … logging=yes
/system/logging add topics=container action=…
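If you also want those messages to survive a reboot, despite the flash-wear concern above, you can aim the container topic at disk instead. A sketch, with the action name being my own choice:

# send container output to a small set of rotated files on local storage
/system/logging/action/add name=containers target=disk disk-file-name=container disk-file-count=2
/system/logging/add topics=container action=containers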
Having done so, we have a new limitation to contend with: RouterOS logging isn’t as powerful as the Docker “logs” command, which by default works as if you asked it, “Tell me what this particular container logged since the last time I asked.” RouterOS logging, on the other hand, mixes everything together in real time, requiring you to dig through the history manually.

(The same is true of podman logs, except that it ties into systemd’s unified “journal” subsystem, a controversial design choice that ended up paying off handsomely when Podman came along and wanted to pull up per-container logs to match the way Docker behaved.)
There Is No Local Image Cache
I stated this in the list above, but what does that mean in practice? What do we lose as a result?
A surprising number of knock-on effects result from this lack:
- Registries with pull-rate limiting are more likely to refuse you during experimentation as you repeatedly reinstantiate a container trying to get it to work. This can be infuriating when it happens in the middle of a hot-and-heavy debugging session.

  The pricing changes made to Docker Hub in late 2024 play into this. They're now imposing a limit of 200 pulls per user per 6 hours for users on the free tier, where before they had an unlimited-within-reason policy for public repos. You can give RouterOS a Docker Hub user login name and a CLI token (“password”) to work around that, saving you from the need to compete with all the other anonymous users pulling that image, including random bots on the Internet.

  The thing is, if RouterOS had an image cache, you would only have to pull the image once as long as you keep using the same remote image URL, as when trying out different settings. That would let you side-step the whole mess.

- If the container provides DNS, you may end up in a chicken-and-egg situation where the old container is down but now the router can't pull from the remote registry (e.g. Docker Hub) because it can no longer resolve registry-1.docker.io. An image cache solves this problem by allowing the runtime to pull the new image while the prior one still runs, then do the swap with both versions of the image in the cache. It even allows clever behavior like health checks to gate whether to continue with the swap or trigger a rollback.

- Equivalents for several of the "missing" commands listed below cannot be added to container.npk without adding an image cache first: commit, diff, pull, etc.16
A broad workaround for some of the above is having the foresight to pull the image using Docker or Podman, then save the image out as a tarball and use /container/add file= instead of remote-image. There are landmines along this path owing to the OCI compatibility issue covered separately below.
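A sketch of that workflow, using an arbitrary image and file name; the --platform value must match your router’s CPU, a point expanded upon under CPU Limitations below:

$ docker pull --platform linux/arm64 alpine:latest
$ docker save alpine:latest -o alpine.tar
… upload alpine.tar to the router, then …
> /container/add file=alpine.tar veth=veth1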
Everything Is Rootful
This shows up in a number of guises, but the overall effect is that all containers run as a nerfed root user under container.npk, same as Docker did from the start. This remains the Docker default, but starting with the 20.10 release, it finally got a rootless mode to compete with Podman’s rootless-by-default nature. I bring up this history to show that RouterOS is not unconditionally “wrong” to operate as it does, merely limited.

This design choice may be made reasonably safe through the grace of user namespaces, which cause the in-container root user to be meaningfully different from the Linux root user that RouterOS itself runs as. RouterOS does have a /user model, but they are not proper Linux users as understood by the kernel, with permissions enforced by Linux user IDs; RouterOS users have no meaningful existence at all inside the container. One practical effect of this is that when you start a container as RouterOS user fred, you will not find a fred entry in its /etc/passwd file, and if you create one at container build time (e.g. with a RUN useradd command), it will not be the same fred as the RouterOS user on the outside.
Files created by that nerfed root user will show up as owned by root when using bind-mounted directories on file systems like ext4, which preserve file ownership. One possible solution for this is:
/disk/format-drive file-system=exfat …
It is because of this same limitation that there is no RouterOS equivalent to the create --user*
or --group-add
flags.
If your container was designed to have non-root users inside with meaningful distinctions from root, it may require massaging to work on RouterOS. There are no UID maps to convert in-container user IDs to RouterOS user IDs, etc. This is one of the key reasons why it matters that containers are not VMs; persisting in this misunderstanding is liable to lead you to grief under container.npk. Let go of your preconceptions and use the RouterOS container runner the way it was meant to be applied: running well-focused single services.17
CPU Limitations
This limitation comes in two subclasses:
There Is No Built-In CPU Emulation
Docker and Podman allow you to run an image built for another architecture on your local system through transparent CPU emulation. If you are on an x86_64 host, try this command:
$ docker run --rm -it --platform linux/arm/v7 alpine:latest uname -m
That should yield “armv7l”, an entirely different CPU architecture from your host. Even if you try this on an ARM64 host (e.g. an Apple Silicon macOS box), you still need transparent CPU emulation to cope with the different machine word size.
For that to work under container.npk, the RouterOS developers would have to do the same thing Docker and Podman do: ship the QEMU and Linux kernel binfmt_misc bridges needed to get the OS to accept these “foreign” binaries. Since it would approximately double the size of RouterOS to do this for all the popular CPU architectures, they naturally chose not to do this.
What this means in practice is that you have to be sure the images you want to use were built for the CPU type in your RouterOS device.
There is a path around this obstacle: ship your own CPU emulation, as was done in this forum thread, which describes a container that bundles the 32-bit Intel-compiled netinstall-cli Linux binary along with an ARM build of qemu-i386 so that it will run on ARM RouterOS boxes. For a process that isn’t CPU-bound — and NetInstall is very much I/O-bound — this can be a reasonable solution, as long as you’re willing to pay the ~4 megs the emulator takes up.
Intel and ARM Only
If you run the binfmt test image under your container build system of choice,18 it is likely to list several CPU types besides Intel and ARM, but that only tells you which platforms you can build an image for, not which platforms your runner — container.npk
in this case — will accept. The prior point about lack of CPU emulation means you must find exact matches in this list for the CPU type in your chosen RouterOS device.
MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, most recently as of this writing in their CRS518-16XS-2XQ-RM. Atop that, there are other CPU architectures in the historical mix like PowerPC and TILE. MikroTik doesn’t ship a container.npk
for any of these platforms.
But why not?
To bring up each new build target, the creators of your container build toolchain of choice must bring together:
- a QEMU emulator for the target system
- a sufficiently complete Linux distro ported to that target
- the binfmt_misc kernel modules that tie these two together
QEMU is “easy” in the sense that the hard work has already been done; there are QEMU emulators for every CPU type MikroTik ever shipped. (Details) There’s a partial exception with TILE, which once existed in QEMU core but has been removed for years, following the removal of TILE support from the Linux kernel. The thing is, TILE hasn’t progressed in the meantime, so bringing up a QEMU TILE emulator should be a matter of digging that old code back out of source control, then putting in the work to port it to a decade-newer version of Linux.
The binfmt piece is also easy enough.
That leaves the Linux distros for the target platforms, used as container base images. That’s the true sticking point.
One of the most powerful ideas in the OCI container ecosphere is that you don’t cross-compile programs, you boot an existing Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries, which the binfmt_misc
piece then turns back around and runs under QEMU again. The hard work goes into producing the OS image, after which it’s less work overall this way.
The trick is finding that base OS image in the first place.
For instance, you might have an existing Dockerfile
that says FROM ubuntu:latest
at the top and are wanting to run it on a PPC router. While Ubuntu doesn’t ship any PPC OS images, there have been efforts to port Ubuntu to PPC Macs, and one of those third-party distros might serve as an OCI container build base.
But then, having done so, you’re in a fresh jam when the next container you want to build says “FROM” something else; ubi9, for instance. It’s doubtful you will find a “RHEL for PPC Macs” type of OS distro, leading you to a second-best option of porting the container from RHEL to the Mac Ubuntu image base you already have.
When you next come across one of the huge number of containers based on Alpine, you’ll be back in the soup once again. While its CPU support list is broader than the one for Ubuntu, there is no TILE or MIPS at all, and its PPC support is 64-bit only. Are you going to port the Alpine base image and enough of its package repository to get your container building?
Then there’s Debian, another popular OCI image base, one that’s been ported to a lot of strange platforms, but chances are that it was someone’s wild project, now abandoned. It’s likely the APT package repo isn’t working any more, for one, because who wants to host a huge set of packages for a dead project?
In brief, the reason MikroTik doesn’t ship container.npk
for 32-bit PPC, 32-bit MIPS, and TILE is that there are few Linux distro images in OCI format to use as base images, and it isn’t greatly in their interest to pull that together along with the QEMU and binfmt_misc
pieces for you, nor is it in the financial interest of Docker, Podman, etc.
There’s nothing stopping anyone reading this that has the skill and motivation to do this from doing so, but you’ll have to prove out your containers under emulation. Not until then do I see MikroTik being forced to take notice and provide a build of container.npk
for that platform. It’s not quite a classic chicken-and-egg situation, but I can’t ignore the hiss of radio silence I got in response to this challenge on the forum.
Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.
Incidentally, exploration of the binfmts available to you on your container build host of choice might result in output like linux/mips64le, leaving you exulting, “See, there is MIPS support!” But no. First off, this is 64-bit MIPS, while all MIPS CPUs shipped by MikroTik to this date have been 32-bit. Second, it’s little-endian (LE), which means it wouldn’t work with the big-endian MIPS CPUs that were more popular historically. Third, even if you find/build a platform that includes support for the MIPSBE, MMIPS, and SMIPS CPU types MikroTik shipped, you’re likely back to lack of a base OS to build from.
Automation
Included in the list of lacks above is the Docker Engine API. The closest extant feature is the RouterOS REST API, which can issue commands equivalent to those available at the CLI via /container. With this, you can programmatically add, remove, start, and stop containers, plus more.
What RouterOS does not offer is a way for common control plane software like Docker Desktop or Portainer to manage the containers running on your routers. This is because these programs were written with the assumption that everyone’s running Docker or Podman underneath, and as long as they stick to a compatible subset of the Docker Engine API, implementation details cease to matter up at these programs’ level of abstraction.
If you find yourself needing a control plane for your routers’ containers, you will likely need to write it yourself. Third-party ones are unlikely to be compatible out of the box.
OCI Compliance
RouterOS benefits greatly from the existence of the Open Container Initiative specification process. It gives them a vendor-independent standard to target, rather than continually chasing Docker’s current implementation in an ad hoc fashion.
Unfortunately, container.npk
is not 100% OCI-compliant.
You are likely to run into this class of problems early on when dealing with OCI image tarballs under RouterOS because it cannot currently handle multi-platform image tarballs at all. If you then build a single-platform image or pull one by giving the --platform
flag, it may still fail to load, depending on how it was built. I’ve found the Skopeo tool from the Podman project helpful in fixing this type of problem up:
$ skopeo copy docker-archive:broken.tar docker-archive:working.tar
In theory, this should result in zero change since it’s converting to the same output format as the input, but more than once I’ve seen it fix up some detail that RouterOS’s container image loader can’t cope with on its own.
Note, incidentally, that we don’t use Skopeo’s oci-archive
format specifier. I don’t know why, but I’ve had less success with that.
Top-Level Commands
So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.
For lack of any better organization principle, I’ve chosen to cover the docker
CLI commands in alphabetical order. Because Podman cloned the Docker CLI, this ordering matches up fairly well with its top-level command structure as well, the primary exception being that I do not currently go into any of Podman’s pure extensions, ones such as its eponymous pod
command.
build/buildx
RouterOS provides a bare-bones container runtime only, not any of the image building toolchain.
commit
Given the global limitations, it should be no surprise that RouterOS has no way to commit changes made to the current image layer to a new layer.
compose
RouterOS completely lacks multi-container orchestration features, including lightweight single-box ones like Compose or Kind virtual clusters.
create/load

Covered above, under Container Creation.
cp
RouterOS does let you mount a volume inside a container, then use the regular /file facility to copy files in under that volume’s mount point, but this is not at all the same thing as the “docker cp” command. There is no way to overwrite in-container files with external data short of rebuilding the container or using in-container mechanisms like /bin/sh to do the copying for you.
If you come from a Docker or Podman background, their local overlay image stores might lead you into thinking you could drill down into the GUID-named “container store” directories visible under /file
and perform ad hoc administration operations like overwriting existing config files inside the container, but alas, the RouterOS CLI will not let you do that.
diff
With neither a local image cache nor a CoW file system to provide the baseline, there can be no equivalent command.
events
RouterOS doesn’t support container events.
exec
There is no way in RouterOS to execute a command inside a running container short of /container/shell, which of course only works if there is a /bin/sh inside the container.
export/save
There is no way to produce a tarball of a running container’s filesystem or to save its state back to an OCI image tarball.
The documented advice for getting such a tarball is to do this on the PC side via docker
commands, then upload the tarball from the PC to the RouterOS device.
history
RouterOS doesn’t keep this information.
image/images
The lack of a build toolchain means there is no sensible equivalent for the “docker image build” subcommand.
The bulk of the remaining missing subcommands are explained by the lack of a local image cache:
- history
- import/load/save
- ls
- prune
- rm/rmi
- tag
- tree

The few remaining subcommands are implicitly covered elsewhere: inspect and push/pull.
import
This is /container/add file=oci-image.tar
in RouterOS.
info/inspect
With the understanding that RouterOS has far fewer configurables than a big-boy container engine, the closest commands in RouterOS are:
/container/config/print
/container/print detail where …
:put [:serialize value=[/container/get 0] to=json options=json.pretty]
That last one was crafted by @Nick on the MikroTik Discord. It gives a pretty-printed JSON version of what you get from the second command, which is useful when automating /container
commands via SSH, as with Ansible. Even so, it's far short of the pages and pages of detail you get from the Docker and Podman CLI equivalents.
A related limitation is that configurable parameters are often global in RouterOS, set for all containers running on the box, not available to be set on a per-container basis. A good example of this is the memory limit, set via /container/config/set ram-high=…
kill/stop
RouterOS doesn’t make a distinction between “kill” and “stop”. The /container/stop
command behaves more like docker kill
or docker stop -t0
in that it doesn’t try to bring the container down gracefully before giving up and killing it.
login/logout
RouterOS only allows you to configure a single image registry, including the login parameters:
/container/config/set registry-url=… username=… password=…
The only way to “log out” is to overwrite the username and password via:
/container/config/set username="" password=""
logs

Covered above, under Log Handling.
pause/unpause
No such feature in RouterOS; a container is running or not.
If the container has a shell, you could try a command sequence like this to get the pause effect:
> /container/shell 0
$ pkill -STOP 'name of entrypoint'
If that worked, sending a CONT
signal will unpause the process.
port
RouterOS exposes all ports defined for a container in the EXPOSE directive in the Dockerfile. The only ways to instantiate a container with fewer exposed ports are to either rebuild it with a different EXPOSE value or to create a derived container with the FROM directive and set a new EXPOSE value.
(See also the discussion of --publish
above.)
run

Covered above, under There Is No “Run”.
ps/stats/top
The closest thing in RouterOS is the /container/print follow*
commands.
A more direct alternative would be to shell into the container and run whatever it has for a top
command, but of course that is contingent on any of that being available.
push/pull
RouterOS maintains no local image cache, thus cannot push or pull images.
While it can pull from an OCI image repo, it does so as part of /container/add, which is closer to a docker create command than to docker pull.
There is no equivalent at all to docker push.
rename
RouterOS doesn’t let you set the name on creation, much less rename it later. The closest you can come to this is to add a custom comment, which you can set both at “add” time and after creation.
restart
This shortcut for stop
followed by start
doesn’t exist.
It often ends up being more complex than that because the stop
operation is asynchronous. There are no flags to make it block until the container does stop, nor a way to set a timeout on it, after which it kills the container outright, as you get with the big-boy engines. You are likely to need a polling loop to wait until the running container’s state transitions to “stopped” before calling /container/start
on it.
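A minimal sketch of such a loop, assuming the target is container number 0:

# synchronous restart: stop, wait for the state change, then start again
/container/stop 0
:while ([/container/get 0 status] != "stopped") do={ :delay 1s }
/container/start 0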
See also --restart
above.
rm
RouterOS spells this /container/remove, but do be aware that there is no equivalent for docker rm -f to force the removal of a running container. RouterOS makes you stop it first.
search
There is no equivalent to this in RouterOS. You will need to connect to your image registry of choice and use its search engine.
secret
This typically shows up as part of Docker Swarm, Kubernetes, or Podman pods, none of which exists under RouterOS, which is why it shouldn’t surprise you that RouterOS has no secret-sharing facility. The standard fallbacks for this are passed-in environment variables or bind-mounted volumes.
start
RouterOS has /container/start, with limitations you can reasonably infer from the rest of this article.
swarm
Extending from the lack of single-box container orchestration features, RouterOS also completely lacks a cluster orchestration feature, not even a lightweight one like Docker Swarm or k3s, and it certainly doesn’t support the behemoth that is Kubernetes.
tag
RouterOS does nothing more with tags than to select which image to download from a registry. Without a local image cache, you cannot re-tag an image.
update
There is no equivalent short of this:
/container/stop 0
…wait for it to stop…
/container/remove 0
/container/add …
The last step is the tricky one since /container/print
shows most but not all of the options you gave to create it. If you didn’t write down how you did that, you’re going to have to work that out to complete the command sequence.
version
While RouterOS’s container.npk
technically does have an independent version number of its own, it is meant to always match that of the routeros.npk
package you have installed. RouterOS automatically upgrades both in lock-step, making this the closest equivalent command:
/system/package/print
wait
The closest equivalent to this would be to call /container/stop
in a RouterOS script and then poll on /container/print where …
until it stopped.
License
This work is © 2024-2025 by Warren Young and is licensed under CC BY-NC-SA 4.0
- ^ Podman, LXC/LXD, etc.
- ^ Version 27.1.1, according to dnf remove docker-ce… after installing these packages per the instructions. Note also that this is the “engine” alone, leaving out the extra gigabyte of stuff that makes up Docker Desktop. This is what you’d run on a remote server, the closest situation to what a headless RouterOS box provides.
- ^ This is essentially Docker Engine minus the build tooling. The size is for version 2.0.0-rc1 of nerdctl plus the containerd from the Docker Engine CE install above, according to sudo dnf remove containerd and du -sh nerdctl.
- ^ Version 4.9.4 on EL9, according to sudo dnf remove podman conmon crun.
- ^ Version 7.15.2, according to /system/package/print.
- ^ This is not a verified fact, but an inference based on the observation that if RouterOS did have this facility underlying its containers, several other limitations covered here would not exist.
- ^ The only configurable resource limit is on maximum RAM usage, and it’s global, not settable on a per-container basis.
- ^ RouterOS itself may see the USB device and let your container use it indirectly, as with storage media, which you can bind-mount into the container with “/container/add mounts=…”.
- ^ containerd in modern setups, dockerd in old ones
- ^ This is the runner underpinning containerd, thus also Docker, although it precedes it. Long before they created containerd, it underpinned dockerd instead. Because it is so primordial, a good many other container engines are also based on it.
- ^ This is the bare-bones OCI image runner built into systemd, with a feature set fairly close to that of container.npk. The size above is for version 252 of this program’s parent systemd-container package as shipped on EL9.
- ^ This is Podman’s alternative to runc, written in C to make it smaller. Early versions of Podman once relied on runc, and it can still be configured to use it, but the new default is to use the slimmer but feature-equivalent crun.
- ^ There is a manual /container/remove command, but it does something rather different.
- ^ Yes, for certain. I tested the GNU, BSD, and BusyBox implementations of ls, and they all do this.
- ^ Indeed, all of my public containers elide the shell for this reason.
- ^ To be fair, a number of these commands only need to exist in the big-boy engines because of the image cache: rmi, prune, etc.
- ^ This philosophy is not specific to RouterOS, nor is it special pleading on its behalf, meant to justify its limitations. Microservices are a good idea atop all container runtimes.
- ^ Simplest method: docker run --privileged --rm tonistiigi/binfmt