MikroTik Solutions

Changes To Container Limitations

Changes to "Container Limitations" between 2024-07-27 04:22:31 and 2024-07-28 07:57:33

## <a id="compat"></a>It Only Supports Intel and ARM

MikroTik has shipped an awful lot of MIPS-based product over the years, and it continues to do so, most recently as of this writing in their [CRS518-16XS-2XQ-RM](https://mikrotik.com/product/crs518_16xs_2xq). Atop that, there are other CPU architectures in the historical mix like PowerPC and TILE. MikroTik doesn’t ship a `container.npk` for any of these platforms.

But why not?

The simple reason is, the major [OCI] image build toolchains — mainly `docker buildx` and its clones — don’t target these other CPU types, a fact you can verify for yourself by running…

    $ docker run --privileged --rm tonistiigi/binfmt

“But **a-hah**,” you cry! “There’s MIPS *and* PPC in that list!” You then demand, “Where’s my build of `container.npk` for them, then?”

Did you overlook the “64” in those outputs? These are for modern 64-bit versions of these CPUs, but MikroTik never shipped any 64-bit MIPS or PowerPC CPUs, not even in the 100G behemoth linked above, which gets by with a 32-bit MIPSBE-based CPU because it is designed as a switch, not a router, offloading nearly all traffic to the switch ASIC.
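
That “64” matters at the binary level. The ELF header of every executable records both the word size and the byte order, and those are among the leading bytes the kernel’s `binfmt_misc` matcher keys on. A quick illustration using a fabricated header rather than any real RouterOS binary:

```shell
# Byte 4 of an ELF header is the class (1 = 32-bit, 2 = 64-bit) and byte 5
# the data encoding (2 = big-endian); together they spell "MIPSBE" on
# MikroTik's 32-bit big-endian boards. Fabricate the first six header
# bytes of such a binary, then read the class byte back out:
printf '\177ELF\001\002' > hdr.bin
od -An -j4 -N1 -tu1 hdr.bin    # prints 1: a 32-bit binary
```

A `linux/mips64` binary carries a 2 in that class byte instead, which is one reason a 32-bit MIPSBE kernel refuses to run it at all.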

The only thing that `tonistiigi/binfmt` command tells you is that you’re free to build images using commands like this:

    docker buildx --platform linux/mips64 …

You can do that all day long, but nothing you do will force a MIPSBE build of `container.npk` to run the resulting binaries short of including a CPU emulator in the image, [per above](#qemu).

You may then point out that you don’t actually need the cross-compilation toolchain to exist in Docker proper. FOSS toolchains do exist for TILE, 32-bit PPC, MMIPS, SMIPS… Why can’t you use them to cross-compile binaries on your desktop machine and then use tools like [Buildah](https://buildah.io/) to copy those binaries into the image unchanged?

You _can_ do that, but it’s only “easy” for static binaries. If your program requires third-party shared libraries, you have to build them, too, along with the dynamic link loader and whatever other infrastructure it needs. You’re likely to find yourself building a small Linux distribution as dependency leads to dependency, and now you’re off on a major [yak-shaving](https://softwareengineering.stackexchange.com/q/388092/39162) expedition.

There’s a better way. One of the most powerful ideas in the OCI container ecosphere is that you don’t cross-compile programs at all: you boot an _existing_ Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries, which the `binfmt_misc` piece then turns back around and runs under QEMU again.

What this means in practice is that if you can assemble:

* a sufficiently complete Linux distro ported to your target platform; _and_
* a version of QEMU that will run it on your development system; _and_
* the binfmt patches necessary to let your local kernel tie the two together…

…then you can get the `buildx` tooling to build foreign binaries under all this that will run on your target platform without you ever needing to mess about with cross-compiler toolchains.

QEMU is “easy” in the sense that the hard work has already been done; there are QEMU emulators for every CPU type MikroTik ever shipped. ([Details](https://www.qemu.org/docs/master/system/targets.html)) There’s a partial exception with TILE, which once existed in QEMU core but was removed years ago, following the removal of TILE support from the Linux kernel. The thing is, TILE hasn’t progressed in the meantime, so bringing up a QEMU TILE emulator should be a matter of putting in the work to port it to a decade-newer version of Linux.

The binfmt piece is also easy enough.

That leaves the Linux distros for the target platforms used as container base images. That’s the true sticking point.

It’s a lot of work to get a single new Linux distro working under `buildx`, even if you start with an existing third-party port such as the Mac PPC builds of Ubuntu, and good luck if you want to support an oddball CPU like TILE. Even having done so, you’re in a fresh jam the moment you try to rebuild an existing container that says “`FROM`” something else; `ubi9`, for instance. Do you repeat all that porting work for RHEL’s [UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image), or do you expend the lesser effort to port the container from RHEL to the Ubuntu image base you already have?

Then you come across one of the huge number of containers based on Alpine, and you’re back in the soup again. While [its CPU support list](https://wiki.alpinelinux.org/wiki/Requirements) is broader than [the one for Ubuntu](https://ubuntu.com/cpu-compatibility), there is no TILE or MIPS at all, and its PPC support is 64-bit only. Are you going to port the Alpine base image and enough of its package repository to get your container building?

Then there’s Debian, another popular OCI image base, one that’s been ported to a lot of strange platforms, but chances are that any given port was someone’s wild project, now abandoned. It’s likely the APT package repo isn’t working any more, for one, because who wants to host a huge set of packages for a dead project?

Thus we have two possible paths to success:

*   cross-compilation from source code and building images by hand with the likes of Buildah; or
*   porting a sufficient subset of the world’s Linux distros to support the containers you want to rebuild

In brief, the reason MikroTik doesn’t ship `container.npk` for 32-bit PPC, 32-bit MIPS, and TILE is that there are few Linux distro images in OCI format to use as base images, and it isn’t greatly in their interest to pull that together along with the QEMU and `binfmt_misc` pieces for you, nor is it in the financial interest of Docker, Podman, etc.

There’s nothing stopping anyone reading this who has the skill and motivation from taking one of these paths, but you’ll have to prove out your containers under emulation. Not until you have working images in hand do I see MikroTik being forced to take notice and provide a build of `container.npk` for that platform. It’s not quite a classic chicken-and-egg situation, but I can’t ignore the hiss of radio silence I got in response to [this challenge](https://forum.mikrotik.com/viewtopic.php?t=204868#p1058351) on the forum.
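
For concreteness, the first of those paths might look roughly like this with [Buildah](https://buildah.io/). This is a hypothetical sketch, not a tested recipe: `hello` stands in for a real static cross-built binary, `hello-mipsbe` is a placeholder image name, and the Buildah steps are skipped where the tool is unavailable:

```shell
# Hypothetical sketch: wrap an already-cross-compiled static binary in an
# OCI image with Buildah, bypassing the docker buildx toolchain entirely.
printf 'placeholder for a real static cross-built binary\n' > hello
if command -v buildah >/dev/null 2>&1 && ctr=$(buildah from scratch 2>/dev/null); then
  buildah copy "$ctr" ./hello /hello              # copy the binary in unchanged
  buildah config --entrypoint '["/hello"]' "$ctr"
  buildah commit "$ctr" hello-mipsbe              # emit the image locally
  echo "assembled hello-mipsbe"
else
  echo "dry run only: Buildah unavailable"
fi
```

Even with an image assembled this way in hand, you still face the prove-it-under-emulation problem described above before any MIPSBE board can run it.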

Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.


# <a id="tlc"></a>Remaining Top-Level Commands

So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.