sh-5.1# <do something inside the container>
sh-5.1# exit
…may end up expressed under RouterOS as…
> /container
> add remote-image=alpine:latest veth=veth1 entrypoint=sleep cmd=3600
… poll, waiting on it to download & extract …
> print
… nope, not ready, wait some more …
> print
… nope, wait some more …
… nope, wait still longer …
> print
… oh, good, now we have the container ID …
> start 0
… wait for it to launch …
> shell 0
sh-5.1# <do something inside the container>
sh-5.1# exit
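The polling dance above can be scripted away. The following is a rough sketch in RouterOS scripting, not a tested recipe: the `status` property name and its values are assumptions based on typical `/container print` output, and the 2-second delay is arbitrary.

```
# Sketch only: wait for the container row to appear, then wait for the
# image to finish extracting (status settles at "stopped"), then start it.
:while ([:len [/container find where remote-image="alpine:latest"]] = 0) do={ :delay 2s }
:local id [/container find where remote-image="alpine:latest"]
:while ([/container get $id status] != "stopped") do={ :delay 2s }
/container start $id
```

Even automated, the contrast with Docker's one-command `docker run` remains the point.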
The same works on recent versions of Podman, and you can get it working on older versions with a bit of manual setup.(^It’s off-topic to go into the details here, but it amounts to “`podman machine ssh`” followed by a “`dnf install qemu-static-*`” command.)
For that to work under `container.npk`, the RouterOS developers would have to ship the QEMU and Linux kernel [`binfmt_misc`](https://en.wikipedia.org/wiki/Binfmt_misc) bridges needed to get the OS to accept these “foreign” binaries. Since it would approximately double the size of RouterOS to do this for all the popular CPU architectures, they naturally chose _not_ to do this.
What this means in practice is that you have to be sure the images you want to use were built for the CPU type in your RouterOS device. This is true even between closely-related platforms: an ARM64 router won’t run a 32-bit ARMv7 image, if only because the binaries inside it assume a 32-bit Linux kernel syscall interface.
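To see which side of that divide a machine falls on, compare the kernel’s machine name against the OCI platform string an image is published under. This mapping is a hand-written, illustrative subset, not an exhaustive table:

```shell
# Map a kernel "uname -m" machine name to the OCI/Docker platform string
# an image must have been built for. Illustrative subset only.
arch=$(uname -m)
case "$arch" in
  x86_64)  platform="linux/amd64"  ;;
  aarch64) platform="linux/arm64"  ;;
  armv7l)  platform="linux/arm/v7" ;;  # 32-bit ARM; not interchangeable with arm64
  *)       platform="unknown/$arch" ;;
esac
echo "$platform"
```

On the RouterOS side, `/system/resource/print` reports an `architecture-name` field you can compare against the platforms an image offers (visible via `docker manifest inspect <image>`).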
<a id="qemu"></a>There is an exception: you can ship your own CPU emulation. Take [this thread](https://forum.mikrotik.com/viewtopic.php?t=189485), for example, which describes a container that bundles the 32-bit Intel-compiled `netinstall-cli` Linux binary along with an ARM build of `qemu-i386` so that it will run on ARM RouterOS boxes. For a process that isn’t CPU-bound — and NetInstall is very much I/O-bound — this can be a reasonable solution as long as you’re willing to pay the ~4 megs the emulator takes up.
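The shape of that container can be approximated with a few Buildah commands. Everything below — file names, image tag — is a guess at the structure, not a copy of the actual container from that thread:

```
# Hypothetical sketch: bundle an ARM-built QEMU user-mode emulator with a
# 32-bit Intel binary so the pair runs on an ARM host.
ctr=$(buildah from scratch)
buildah copy "$ctr" qemu-i386 netinstall-cli /
buildah config --entrypoint '["/qemu-i386", "/netinstall-cli"]' "$ctr"
buildah commit "$ctr" netinstall-arm:latest
```

The emulator becomes the entrypoint, taking the foreign binary as its argument; the kernel only ever executes a native ARM program.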
## <a id="compat"></a>It Only Supports Intel and ARM
MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, most recently as of this writing in their [CRS518-16XS-2XQ-RM](https://mikrotik.com/product/crs518_16xs_2xq). Atop that, there are other CPU architectures in the historical mix like PowerPC and TILE. MikroTik doesn’t ship a `container.npk` for any of these platforms.
But why not?
You’re free to do that all day long, but nothing you do will force a MIPSBE build of `container.npk` to run the resulting binaries short of including a CPU emulator in the image, [per above](#qemu).
You may then point out that you don’t actually need the cross-compilation toolchain to exist in Docker proper. FOSS toolchains do exist for TILE, 32-bit PPC, MMIPS, SMIPS… Why can’t you use them to cross-compile binaries on your desktop machine and then use tools like [Buildah](https://buildah.io/) to copy those binaries into the image unchanged?
You can, but now you’ve bought several new problems:
1. Until someone actually does this and provides a compelling case to MikroTik that they should expend the effort to build `container.npk` for those old CPU designs, my sense of MikroTik’s public statements on this matter is that they have no interest in spending the engineering time. It’s not quite a classic chicken-and-egg situation, but without working images in hand, I don’t see a bid to make MikroTik append this task to its development pipeline succeeding.(^I base that interpretation on the hiss of radio silence I got in response to [this challenge](https://forum.mikrotik.com/viewtopic.php?t=204868#p1058351).)
2. It’s only “easy” for static binaries. If your program requires third-party shared libraries, you have to build them, too, along with the dynamic link loader, and… You’re likely to find yourself building a small Linux distribution as dependency leads to dependency, and now you’re off on a major [yak-shaving](https://softwareengineering.stackexchange.com/q/388092/39162) expedition.
3. This plan ignores one of the most powerful ideas in the OCI container ecosphere: you don’t cross-compile programs at all, you boot a Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries. What this means in practice is that if you can assemble:
* a sufficiently complete Linux distro ported to your target platform; _and_
* a version of QEMU that will run it on your development system; _and_