# Container Limitations
## Motivation
The RouterOS `container.npk` feature is highly useful, but it is a custom development written in-house by MikroTik, not a copy of Docker Engine or any of the other server-grade container engines.[1] Because of the stringent resource constraints on the bulk of MikroTik’s devices, it is exceptionally small, thus unavoidably very thinly featured compared to its big-boy competition. If we can use installed size as a proxy for expected feature set size, we find:
And this is fine! RouterOS serves a particular market, and its developers are working within those constraints. The intent here is to provide a mapping between what people expect of a fully-featured container engine and what you actually get in RouterOS. Where it makes sense, I try to provide workarounds for missing features and guidance to alternative methods where RouterOS’s way merely works differently.
## Global Limitations
Allow me to begin with the major limitations visible at a global level in the RouterOS `container.npk` feature, both to satisfy the tl;dr crowd and to set broad expectations for the rest of my readers. This super-minimal container implementation lacks:
- orchestration
- image building
- a local image cache
- JSON and REST APIs
- a CoW/overlay file system[6]
- per-container limit controls:[7]
    - FD count
    - PID limit
    - CPU usage
    - storage IOPS
    - `/dev/shm` size limit
    - terminal/logging bps
    - capability restrictions
    - seccomp profiles
    - rlimit
- hardware pass-thru:
    - USB device entries under `/dev` are on the wish list, but not currently available.[8]
    - There is no GPU support, not even for bare-metal x86 installs.
Lack of a management daemon[9] is not in that list because a good bit of Docker’s competition also lacks this, on purpose. Between that and the other items on the list, the fairest comparison is not to fully-featured container engines like Docker and Podman but to the container runner at their heart:
One reason `container.npk` is far smaller than even the smallest of these runners is that the engines delegate much of what RouterOS lacks to the runner, so that even then it’s an unbalanced comparison. The `kill`, `ps`, and `pause` commands missing from `container.npk` are provided in Docker Engine way down at the `runc` level, not up at the top-level CLI.
With this grounding, let us dive into the details.
## Container Creation
The single biggest area of difference between the likes of Docker and the RouterOS `container.npk` feature is how you create containers from OCI images. It combines Docker’s `create` and `load` commands under `/container/add`, the distinction expressed by whether you give it the `remote-image` or `file` option, respectively.
Given the size of the output from `docker create --help`, it should not be surprising that the bulk of that is either not available in RouterOS or exists in a very different form. Most of these limitations stem from the list above. For instance, the lack of any CPU usage limit features means there is no equivalent under `/container` for the several `docker create --cpu*` options. Rather than go into these options one by one, I’ll cover the ones where the answers cannot be gleaned through a careful reading of the rest of this article:
- `--env`: The equivalent is this RouterOS command pair:

    ```
    /container/envs/add name=NAME …
    /container/add envlist=NAME …
    ```

    This is in fact closer to the way the `--env-file` option works, except that under RouterOS, this particular “file” isn’t stored under `/file`!

- `--expose`/`--publish`: The VETH you attach the container to makes every listening socket visible by default. It is left up to you to manually block off anything exposed against your wishes by use of `/ip/firewall/filter` commands.

- `--health-cmd`: Because health-checks are often implemented by periodic API calls to verify that the container continues to run properly, the logical equivalent under RouterOS is to script calls to `/fetch`, which then issues `/container/{stop,start}` calls to remediate any problems it finds. (See the sketch following this list.)

- `--init`: Although there is no direct equivalent to this in RouterOS, nothing stops you from doing it the old-school way, creating a container that calls “`ENTRYPOINT /sbin/init`” or similar, which then starts the subordinate services inside that container. It would be somewhat silly to use systemd for this in a container meant to run on RouterOS in particular; a more suitable alternative would be Alpine’s OpenRC init system, a popular option for managing in-container services.

- `--label`: The closest equivalent is RouterOS’s `comment` facility, which you can apply to a running container with “`/container/set 0 comment=MYLABEL`”.

- `--mac-address`: If RouterOS had this, I would expect it to be offered as “`/interface/veth/set mac-address=…`”, but that does not currently exist. As it stands, a VETH interface’s MAC address is random, same as the default behavior of Docker.

- `--network`: This one is tricky. While there is certainly nothing like “`/container/add network=…`”, it’s fair to say the equivalent is, “RouterOS.” You are, after all, running this container atop a highly featureful network operating system. Bare-bones the `container.npk` runtime may be, but any limitations you run into with the network it attaches to are more a reflection of your imagination and skill than of any lack of command options under `/container`.

- `--pid`/`--userns`/`--uts`: The RouterOS container runner must use Linux namespaces under the hood, but it does not offer you control over which PID, file, network, etc. namespaces each container uses.

- `--read-only`: RouterOS offers precious little in terms of file system permission adjustment. As a rule, it is best to either shell into the container and adjust permissions there or rebuild the container with the permissions you want from go. Any expectations based on being able to adjust any of this between image download time and container creation time are likely to founder.

- `--restart`: The closest RouterOS gets to this is its `start-on-boot` setting, meaning you’d have to reboot the router to get the container to restart. If you want automatic restarts, you will have to script it, as in the sketch following this list.

- `--rm`: No direct equivalent. There is a manual `/container/remove` command, but nothing like this option, which causes the container runtime to automatically remove the instantiated container after it exits. It’s just as well, since this option is most often used when running ad hoc containers made from a previously downloaded image; RouterOS’s lack of an image cache means you have to go out of your way to export a tarball of the image and upload it to the router, then use “`/container/add file=…`” if you want to avoid re-downloading the image from the repository on each relaunch.
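To make the `--health-cmd` and `--restart` workarounds above concrete, here is a minimal watchdog sketch in RouterOS scripting. Everything specific in it is an assumption for illustration: the container is marked with comment “web”, it serves HTTP at 172.17.0.2, and its `status` field reads “stopped” once a stop completes. Save it as a `/system/script` named `web-watchdog`:

```
# find the container by its comment ("web" is a hypothetical name)
:local id [/container/find where comment="web"]
:do {
    # health probe: a failed fetch throws, landing us in on-error
    /tool/fetch url="http://172.17.0.2/" output=none
} on-error={
    :log warning "web container failed its health check; restarting"
    /container/stop $id
    # stop is asynchronous; poll the status field until it settles
    :while ([/container/get $id status] != "stopped") do={ :delay 1s }
    /container/start $id
}
```

Schedule it to run periodically with something like “`/system/scheduler/add name=web-watchdog interval=1m on-event=web-watchdog`”.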
That brings us to the related matter of…
## There Is No “Run”
RouterOS offers no shorthand command akin to `docker run` for creating and starting a container in a single step. Moreover, the lack of Linux-like interactive terminal handling (covered below) means a simple command like…
```
$ docker run --rm -it alpine:latest
```
…followed by…
```
sh-5.1# <do something inside the container>
sh-5.1# exit
```
…may end up expressed under RouterOS as…
```
> /container
> add remote-image=alpine:latest veth=veth1 entrypoint=sleep cmd=3600
> print
… nope, still downloading, wait …
> print
… nope, still extracting, wait longer …
> print
… oh, good, got the container ID …
> start 0
… wait for it to launch …
> shell 0
sh-5.1# <do something inside the container>
sh-5.1# exit
> stop 0
> remove 0
```
Whew! 😅
I resorted to that “sleep 3600” hack in order to work around the lack of interactive mode in `container.npk`, without which containers of this type will start, do a whole lot of nothing, and then stop. I had to give it some type of busy-work to keep it alive long enough to let me shell in and do my actual work. This sneaky scam is a common one for accomplishing that end, but it has the downside of requiring you to predict how long you want the container to run before stopping; this version only lasts an hour.
If you are imagining more complicated methods for keeping containers running in the background when they were designed to run interactively, you are next liable to fall into the trap that…
## There Is No Host-Side Command Line Parser
The RouterOS CLI isn’t a Bourne shell, and the container feature’s `entrypoint` and `cmd` option parser treats them as simple strings, without any of the parsing you get for free when typing `docker` commands into a Linux command shell. The net effect of all this is that you’re limited to two-word commands, one in `entrypoint` and the other in `cmd`, as in the above “`sleep 3600`” hack.
But how then do you say something akin to the following under RouterOS?
```
docker run -it alpine:latest ls -lR /etc
```
You might want to do that in debugging to find out what a given config file is called and exactly where it is in the hierarchy so that you can target it with a `mount=…` override. If you try to pass it all as…
```
/container/add … entrypoint="ls -lR /etc"
```
…the kernel will complain that there is no command in the container’s `PATH` called “`ls -lR /etc`”.
You may then try to split it as…
```
/container/add … entrypoint="ls" cmd="-lR /etc"
```
…but that will earn you an error message from `/bin/ls` complaining that it refuses to accept “ ” (space) as an option following the `R`!
If you get cute and try to “cuddle” the options with the arguments as…
```
/container/add … entrypoint="ls" cmd="-lR/etc"
```
…the `/bin/ls` implementation will certainly attempt to treat `/` as an option and die with an error message.[13]
Things aren’t always this grim. For instance, you can run my `iperf3` container as a client instead of its default server mode by saying something like:

```
/container/add … cmd="-c192.168.88.99"
```
This relies on the fact that the `iperf3` command parser knows how to break the host name part out from the `-c` option itself, something not all command parsers are smart enough to do. There’s 50 years of Unix and Linux history encouraging programs to rely on the shell to do a lot of work before the program’s `main()` function is even called. The command line processing that `container.npk` applies to its `cmd` argument lacks all that power. If you want Bourne shell parsing of your command line, you have to set it via `ENTRYPOINT` or `CMD` in the `Dockerfile`, then rebuild the image.
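For example, here is a minimal sketch of such a rebuild; the Alpine base is arbitrary, and the shell form of `ENTRYPOINT` (no JSON array) is the key, since Docker wraps it in `/bin/sh -c`, restoring normal word-splitting:

```
FROM alpine:latest
# shell form: the image's /bin/sh does the parsing container.npk can't
ENTRYPOINT ls -lR /etc
```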
## Terminal Handling
Although RouterOS proper is built atop Linux, and it provides a feature-rich CLI, it is nothing like a Linux command shell. I am not speaking of skin-level command syntax differences here; the differences go far deeper.
When you SSH into a RouterOS box, you’re missing out on a meaningful distinction between stdout and stderr, and the kernel’s underlying termios/pty subsystem is hidden from you. These lacks translate directly into limitations in the ability of `container.npk` to mimic the experience of using Docker at the command line.
One of the core RouterOS design principles is being able to run headlessly for long periods, with the administrator connecting to their virtual terminal via WinBox, WebFig, or SSH briefly, only long enough to accomplish some network admin task before logging back out. The RouterOS CLI never was meant to provide the sort of rich terminal experience you need when you work in a Linux terminal all day, every day.
The thing is, Docker was designed around exactly that sensibility.
It is for this inherent reason that `container.npk` cannot provide equivalents of Docker’s `attach` command, nor its “`docker run --attach`” flag, nor the common “`docker run -it`” option pair. The closest it comes to all this is its `shell` command implementation, which can connect your local terminal to a true remote Linux terminal subsystem. Alas, that isn’t a close “`run -it`” alternative because you’re left typing commands at this remote shell, not at the container’s `ENTRYPOINT` process. Even then, it doesn’t always work, since a good many containers lack a `/bin/sh` program inside the container in the first place, on purpose, typically to reduce the container’s attack surface.[14]
## Log Handling
Although Docker logging is tied into this same Linux terminal I/O design, we cannot blame the lack of an equivalent to “`docker logs`” on the RouterOS design principles in the same manner as above. The cause here is different, stemming first from the fact that RouterOS boxes try to keep logging to a minimum by default, whereas Docker logs everything the container says, without restriction. RouterOS takes the surprising default of logging to volatile RAM in order to avoid burning out the flash. Additionally, it ignores all messages issued under “topics” other than the four preconfigured by default, which do not include the “container” topic you get access to by installing `container.npk`.
To prevent your containers’ log messages from being sent straight to the bit bucket, you must say:
```
/container/{add,set} … logging=yes
/system/logging add topics=container action=…
```
Having done so, we have a new limitation to contend with: RouterOS logging isn’t as powerful as the Docker “`logs`” command, which by default works as if you asked it, “Tell me what this particular container logged since the last time I asked.” RouterOS logging, on the other hand, mixes everything together in real time, requiring you to dig through the history manually.
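If you want something closer to a per-container view, one option is to give the container topic its own log buffer and read that back instead. A minimal sketch, where the buffer name `containers` is an arbitrary choice and the final `where buffer=` filter is my best understanding of how recent RouterOS 7 releases let you query a named buffer:

```
# route the container topic into a dedicated 1000-line memory buffer
/system/logging/action/add name=containers target=memory memory-lines=1000
/system/logging/add topics=container action=containers
# later, read back just that buffer
/log/print where buffer=containers
```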
(The same is true of `podman logs`, except that it ties into systemd’s unified “journal” subsystem, a controversial design choice that ended up paying off handsomely when Podman came along and wanted to pull up per-container logs to match the way Docker behaved.)
## CPU Limitations
This limitation comes in two subclasses:
### There Is No Built-In CPU Emulation
Docker lets you run an image built for another architecture on your local system through transparent CPU emulation. If you are on an x86_64 host, this command should drop you into an Alpine shell:
```
$ docker run --rm -it --platform linux/arm64 alpine:latest
```
The same will work on recent versions of Podman, and you can get it to work on old versions of Podman with a bit of manual setup.[15]
For that to work under `container.npk`, the RouterOS developers would have to ship the QEMU and Linux kernel `binfmt_misc` bridges needed to get the OS to accept these “foreign” binaries. Since it would approximately double the size of RouterOS to do this for all the popular CPU architectures, they naturally chose not to do this.
What this means in practice is that you have to be sure the images you want to use were built for the CPU type in your RouterOS device. This is true even between closely-related platforms. An ARM64 router won’t run a 32-bit ARMv7 image, if only because it will assume a 32-bit Linux kernel syscall interface.
There is an exception: you can ship your own CPU emulation. Take this thread, for example, which describes a container that bundles the 32-bit Intel-compiled `netinstall-cli` Linux binary along with an ARM build of `qemu-i386` so that it will run on ARM RouterOS boxes. For a process that isn’t CPU-bound (and NetInstall is very much I/O-bound) this can be a reasonable solution, as long as you’re willing to pay the ~4 megs the emulator takes up.
### It Only Supports Intel and ARM
MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, most recently as of this writing in their CRS518-16XS-2XQ-RM. Atop that, there are other CPU architectures in the historical mix like PowerPC and TILE. MikroTik doesn’t ship a `container.npk` for any of these platforms.
But why not?
To bring up each new build target, the creators of your container build toolchain of choice must bring together:

- a QEMU emulator for the target system
- a sufficiently complete Linux distro ported to that target
- the `binfmt_misc` kernel modules that tie these two together
QEMU is “easy” in the sense that the hard work has already been done; there are QEMU emulators for every CPU type MikroTik ever shipped. There’s a partial exception with TILE, which once existed in QEMU core but has been removed for years, following the removal of TILE support from the Linux kernel. The thing is, TILE hasn’t progressed in the meantime, so bringing up a QEMU TILE emulator should be a matter of putting in the work to port it to a decade-newer version of Linux.
The binfmt piece is also easy enough.
That leaves the Linux distros for the target platforms used as container base images. That’s the true sticking point.
One of the most powerful ideas in the OCI container ecosphere is that you don’t cross-compile programs: you boot an existing Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries, which the `binfmt_misc` piece then turns back around and runs under QEMU again.
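With Docker’s `buildx` tooling, that whole dance hides behind a pair of commands like the following sketch; the image name is a placeholder, and the first command (which registers QEMU with `binfmt_misc` via the `tonistiigi/binfmt` helper image) assumes an x86_64 Linux build host:

```
$ docker run --privileged --rm tonistiigi/binfmt --install all
$ docker buildx build --platform linux/arm64,linux/arm/v7 -t example/myimage --push .
```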
It’s a lot of work to get a single new Linux distro working under `buildx`, even if you start with an existing third-party port such as the Mac PPC builds of Ubuntu. Good luck if you want to support an oddball CPU like TILE, though.
But then, having done so, you’re in a fresh jam when you try to rebuild an existing container that says “`FROM`” something else; `ubi9`, for instance. Do you repeat all that porting work for RHEL’s UBI, or do you expend the lesser effort to port the container from RHEL to the Ubuntu image base you already have?
Then you come across one of the huge number of containers based on Alpine, and you’re back in the soup again. While its CPU support list is broader than the one for Ubuntu, there is no TILE or MIPS at all, and its PPC support is 64-bit only. Are you going to port the Alpine base image and enough of its package repository to get your container building?
Then there’s Debian, another popular OCI image base, one that’s been ported to a lot of strange platforms, but chances are that it was someone’s wild project, now abandoned. It’s likely the APT package repo isn’t working any more, for one, because who wants to host a huge set of packages for a dead project?
In brief, the reason MikroTik doesn’t ship `container.npk` for 32-bit PPC, 32-bit MIPS, and TILE is that there are few Linux distro images in OCI format to use as base images, and it isn’t greatly in their interest to pull that together along with the QEMU and `binfmt_misc` pieces for you, nor is it in the financial interest of Docker, Podman, etc.
There’s nothing stopping any reader with the skill and motivation from doing this, but you’ll have to prove out your containers under emulation. Not until then do I see MikroTik being forced to take notice and provide a build of `container.npk` for that platform. It’s not quite a classic chicken-and-egg situation, but I can’t ignore the hiss of radio silence I got in response to this challenge on the forum.
Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.
## Top-Level Commands
So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.
For lack of any better organization principle, I’ve chosen to cover the remaining `docker` CLI commands in alphabetical order. Because Podman cloned the Docker CLI, this ordering matches up fairly well with its top-level command structure as well, the primary exception being that I do not currently go into any of Podman’s pure extensions, ones such as its eponymous `pod` command.
### `build`/`buildx`

RouterOS provides a bare-bones container runtime only, not any of the image building toolchain.
### `commit`

Given the global limitations, it should be no surprise that RouterOS has no way to commit changes made to the current image layer to a new layer.
### `compose`

RouterOS completely lacks multi-container orchestration features, including lightweight single-box ones like Compose or Kind virtual clusters.
### `create`/`load`

Covered above, under Container Creation: RouterOS combines both operations into `/container/add`.
### `cp`

RouterOS does let you mount a volume inside a container, then use the regular `/file` facility to copy files in under that volume’s mount point, but this is not at all the same thing as the “`docker cp`” command. There is no way to overwrite in-container files with external data short of rebuilding the container or using in-container mechanisms like `/bin/sh` to do the copying for you.
If you come from a Docker or Podman background, their local overlay image stores might lead you into thinking you could drill down into the GUID-named “container store” directories visible under `/file` and perform ad hoc administration operations like overwriting existing config files inside the container, but alas, that does not work.
### `diff`

With neither a local image cache nor a CoW file system to provide the baseline, there can be no equivalent command.
### `events`

RouterOS doesn’t support container events.
### `exec`

There is no way in RouterOS to execute a command inside a running container short of `/container/shell`, which of course only works if there is a `/bin/sh` inside the container.
### `export`/`save`

There is no way to produce a tarball of a running container’s filesystem or to save its state back to an OCI image tarball.

The documented advice for getting such a tarball is to do this on the PC side via `docker` commands, then upload the tarball from the PC to the RouterOS device.
### `history`

RouterOS doesn’t keep this information.
### `image`/`images`

The lack of a build toolchain means there is no sensible equivalent for the “`docker image build`” subcommand.

The rest of the missing subcommands are explained by the lack of a local image cache:

- `history`
- `import`/`load`/`save`
- `ls`
- `prune`
- `rm`/`rmi`
- `tag`
- `tree`

The few remaining subcommands are implicitly covered elsewhere: `inspect` and `push`/`pull`.
### `import`

This is `/container/add file=oci-image.tar` in RouterOS.
### `info`

With the understanding that RouterOS has far fewer configurables than a big-boy container engine, the closest command to this in RouterOS is `/container/config/print`. The output is in typical RouterOS “print” format, not JSON.
### `inspect`

The closest approximation to this in RouterOS is:

```
/container/print detail where …
```
You get only a few lines of information back from this, mainly what you gave it to create the container from the image. You will not get the pages of JSON data the Docker CLI gives.
A related limitation is that the configurable items are often global in RouterOS, set for all containers running on the box, not available to be set on a per-container basis. A good example of this is the memory limit, set via `/container/config/set ram-high=…`.
### `kill`/`stop`

RouterOS doesn’t make a distinction between “kill” and “stop”. The `/container/stop` command behaves more like `docker kill` or `docker stop -t0` in that it doesn’t try to bring the container down gracefully before giving up and killing it.
### `login`/`logout`

RouterOS only allows you to configure a single image registry, including the login parameters:

```
/container/config/set registry-url=… username=… password=…
```

The only way to “log out” is to overwrite the username and password via:

```
/container/config/set username="" password=""
```
### `logs`

Covered above, under Log Handling.
### `pause`/`unpause`

No such feature in RouterOS; a container is running or not.

If the container has a shell, you could try a command sequence like this to get the pause effect:

```
> /container/shell 0
$ pkill -STOP 'name of entrypoint'
```

If that worked, sending a `CONT` signal will unpause the process.
### `port`

RouterOS exposes all ports defined for a container in the `EXPOSE` directive in the `Dockerfile`. The only ways to instantiate a container with fewer exposed ports are to either rebuild it with a different `EXPOSE` value or to create a derived container with the `FROM` directive and set a new `EXPOSE` value.

(See also the discussion of `--publish` above.)
### `run`

Covered above, under There Is No “Run”.
### `ps`/`stats`/`top`

The closest thing in RouterOS is the `/container/print follow*` commands.

A more direct alternative would be to shell into the container and run whatever it has for a `top` command, but of course that is contingent on any of that being available.
### `push`/`pull`

RouterOS maintains no local image cache, thus cannot push or pull images.

While it can pull from an OCI image repo, it does so as part of `/container/add`, which is closer to a `docker create` command than to a `docker pull`.

There is no equivalent at all to `docker push`.
### `rename`

RouterOS doesn’t let you set the name on creation, much less rename it later. The closest you can come to this is to add a custom `comment`, which you can both set at “`add`” time and change after creation.
### `restart`

This shortcut for `stop` followed by `start` doesn’t exist.

It often ends up being more complex than that because the `stop` operation is asynchronous. There are no flags to make it block until the container does stop, nor a way to set a timeout on it, after which it kills the container outright, as you get with the big-boy engines. You are likely to need a polling loop to wait until the running container’s state transitions to “stopped” before calling `/container/start` on it.
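A minimal sketch of such a loop in RouterOS scripting, assuming the container is item 0 and that its `status` field reads “stopped” once the stop completes:

```
/container/stop 0
# stop is asynchronous: poll until it finishes, giving up after ~30 seconds
:local tries 0
:while ([/container/get 0 status] != "stopped" && $tries < 30) do={
    :delay 1s
    :set tries ($tries + 1)
}
/container/start 0
```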
See also `--restart` above.
### `rm`

RouterOS spells this `/container/remove`, but do be aware, there is no equivalent for `docker rm -f` to force the removal of a running container. RouterOS makes you stop it first.
Another knock-on effect to be aware of stems from the lack of a local image cache: removing a container and reinstalling it from the same remote image requires RouterOS to re-download the image, even when done back-to-back, even if you never start the container in between and thereby cause it to make changes to the expanded image’s files. You can end up hitting annoying rate-limiting on the “free” registries in the middle of a hot-and-heavy debugging session due to this. Ask me how I know. 😁

The solution is to produce an OCI image tarball in the format subset that `/container/add file=…` will accept.
But that brings up a new limitation worth mentioning: `container.npk` isn’t 100% OCI-compliant. It can’t handle multi-platform image tarballs, for one. You have to give the matching `--platform` option when downloading the tarball to get something `container.npk` will accept.
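On the PC side, that workflow amounts to something like this sketch, assuming an ARM64 router and using `alpine` purely as an example image:

```
$ docker pull --platform linux/arm64 alpine:latest
$ docker save alpine:latest -o alpine.tar
```

Upload `alpine.tar` to the router, then recreate the container with “`/container/add file=alpine.tar …`” on each relaunch, sidestepping the registry entirely.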
### `search`

There is no equivalent to this in RouterOS. You will need to connect to your image registry of choice and use its search engine.
### `secret`

This typically shows up as part of Docker Swarm, Kubernetes, or Podman pods, none of which exists under RouterOS, which is why it shouldn’t surprise you that RouterOS has no secret-sharing facility. The standard fallbacks for this are passed-in environment variables or bind-mounted volumes.
### `start`

RouterOS has `/container/start`, with limitations you can reasonably infer from the rest of this article.
### `swarm`

Extending from the lack of single-box container orchestration features, RouterOS also completely lacks any cluster orchestration feature, even a lightweight one like Docker Swarm or k3s, and it certainly doesn’t support the behemoth that is Kubernetes.
### `tag`

RouterOS does nothing more with tags than to select which image to download from a registry. Without a local image cache, you cannot re-tag an image.
### `update`

There is no equivalent short of this:

```
/container/stop 0
…wait for it to stop…
/container/remove 0
/container/add …
```

The last step is the tricky one, since `/container/print` shows most but not all of the options you gave to create it. If you didn’t write down how you did that, you’re going to have to work that out to complete the command sequence.
### `version`

While RouterOS’s `container.npk` technically does have an independent version number of its own, it is meant to always match that of the `routeros.npk` package you have installed. RouterOS automatically upgrades both in lock-step, making this the closest equivalent command:

```
/system/package/print
```
### `wait`

The closest equivalent to this would be to call `/container/stop` in a RouterOS script and then poll on `/container/print where …` until it stopped, much as in the loop shown under `restart` above.
1. Podman, LXC/LXD, etc.
2. Version 27.1.1, according to `dnf remove docker-ce…` after installing these packages per the instructions. Note also that this is the “engine” alone, leaving out the extra gigabyte of stuff that makes up Docker Desktop. This is what you’d run on a remote server, the closest situation to what a headless RouterOS box provides.
3. This is essentially Docker Engine minus the build tooling. The size is for version 2.0.0-rc1 of `nerdctl` plus the `containerd` from the Docker Engine CE install above, according to `sudo dnf remove containerd` and `du -sh nerdctl`.
4. Version 4.9.4 on EL9, according to `sudo dnf remove podman conmon crun`.
5. Version 7.15.2, according to `/system/package/print`.
6. This is not a verified fact, but an inference based on the observation that if RouterOS did have this facility underlying its containers, several other limitations covered here would not exist.
7. The only configurable resource limit is on maximum RAM usage, and it’s global, not settable on a per-container basis.
8. Not unless RouterOS itself sees the USB device, as with storage media, which you can bind-mount into the container with “`/container/add mounts=…`”.
9. `containerd` in modern setups, `dockerd` in old ones.
10. This is the runner underpinning `containerd`, thus also Docker, although it precedes it. Long before they created `containerd`, it underpinned `dockerd` instead. Because it is so primordial, a good many other container engines are also based on it.
11. This is the bare-bones OCI image runner built into systemd, with a feature set fairly close to that of `container.npk`. The size above is for version 252 of this program’s parent `systemd-container` package as shipped on EL9.
12. This is Podman’s alternative to `runc`, written in C to make it smaller. Early versions of Podman once relied on `runc`, and it can still be configured to use it, but the new default is to use the slimmer but feature-equivalent `crun`.
13. Yes, for certain. I tested the GNU, BSD, and BusyBox implementations of `ls`, and they all do this.
14. Indeed, all of my public containers elide the shell for this reason.
15. It’s off-topic to go into the details here, but it amounts to “`podman machine ssh`” followed by a “`dnf install qemu-static-*`” command.