    $ docker run --privileged --rm tonistiigi/binfmt

“But **a-hah**,” you cry! “There’s MIPS *and* PPC in that list!” You then demand, “Where’s my build of `container.npk` for them, then?”

Did you overlook the “64” in those outputs? These are for modern 64-bit versions of these CPUs, but MikroTik never shipped any 64-bit MIPS or PowerPC CPUs, not even in the 100G behemoth linked above, which gets by with a 32-bit MIPSBE-based CPU because it is designed as a switch, not a router, offloading nearly all traffic to the switch ASIC.

The only thing the above command tells you is that you’re free to build images using commands like this:

    docker buildx build --platform linux/mips64 …

You can do that all day long, but nothing you do will force a MIPSBE build of `container.npk` to run the resulting binaries short of including a CPU emulator in the image, [per above](#qemu).

You may then point out that you don’t actually need the cross-compilation toolchain to exist in Docker proper. FOSS toolchains do exist for TILE, 32-bit PPC, MMIPS, SMIPS… Why can’t you use them to cross-compile binaries on your desktop machine and then use tools like [Buildah](https://buildah.io/) to copy those binaries into the image unchanged?

You _can_ do that, but it’s only “easy” for static binaries. If your program requires third-party shared libraries, you have to build them, too, along with the dynamic link loader, and whatever other infrastructure it needs. You’re likely to find yourself building a small Linux distribution as dependency leads to dependency, and now you’re off on a major [yak-shaving](https://softwareengineering.stackexchange.com/q/388092/39162) expedition.
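
To make the static-binary path concrete, here is roughly what it looks like with Buildah. Treat it as a sketch, not a tested recipe: the cross toolchain name and the `hello` program are placeholders, and a 32-bit big-endian MIPS target is assumed.

    $ mips-linux-musl-gcc -static -o hello hello.c    # placeholder name; use whatever MIPS cross toolchain you have
    $ ctr=$(buildah from scratch)                     # start from an empty image
    $ buildah copy "$ctr" hello /hello                # copy the prebuilt binary in unchanged
    $ buildah config --os linux --arch mips --entrypoint '["/hello"]' "$ctr"
    $ buildah commit "$ctr" hello-mipsbe:latest

Note that `--arch mips` only labels the image’s platform metadata; nothing verifies that the binary you copied in actually matches it.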

There’s a better way. One of the most powerful ideas in the OCI container ecosphere is that you don’t cross-compile programs at all: you boot an _existing_ Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries.

What this means in practice is that if you can assemble:

* a sufficiently complete Linux distro ported to your target platform; _and_
* a version of QEMU that will run it on your development system; _and_
* the binfmt patches necessary to let your local kernel tie the two together…

…then you can get the `buildx` tooling to build foreign binaries under all this that will run on your target platform without you ever needing to mess about with cross-compiler toolchains.
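
To see that loop end to end, here is a sketch for `linux/arm/v7`, one of the platforms the stock QEMU/binfmt setup already covers; it stands in purely to illustrate the mechanism, since the exotic targets above would first require you to supply the distro image, the QEMU port, and the binfmt glue yourself. It assumes a trivial `hello.c` in the build context:

    # Dockerfile: the compiler runs inside the emulated build container
    FROM alpine:3.20
    RUN apk add --no-cache build-base
    COPY hello.c /src/hello.c
    RUN cc -static -o /hello /src/hello.c

Build it for the foreign platform with:

    $ docker buildx build --platform linux/arm/v7 -t hello:armv7 --load .

No cross-compiler appears anywhere in this: Alpine’s own `cc` runs under QEMU and emits ARM code because, as far as it can tell, it _is_ running on ARM.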

It’s a lot of work to get a single new Linux distro working under `buildx`, even if you start with an existing third-party port such as the Mac PPC builds of Ubuntu, but having done so, you’re in a fresh jam when you try to rebuild an existing container that says “`FROM`” something else; `ubi9`, for instance. Do you repeat all that porting work for RHEL’s [UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image), or do you expend the lesser effort to port the container from RHEL to Ubuntu? 

A huge number of containers are based on Alpine, but while [its CPU support list](https://wiki.alpinelinux.org/wiki/Requirements) is broader than [the one for Ubuntu](https://ubuntu.com/cpu-compatibility), there is no TILE or MIPS at all, and its PPC support is 64-bit only. Are you going to port the Alpine base image and enough of its package repository to get your container building?

Debian is another popular OCI image base, and it’s been ported to a lot of strange platforms, but chances are that it was someone’s wild project, now abandoned. It’s likely the APT package repo isn’t working any more, for one, because who wants to host a huge set of packages for a dead project?

Thus we have two possible paths to success:

*   cross-compilation from source code and building images by hand with the likes of Buildah; or
*   porting a sufficient subset of the world’s Linux distros to support the containers you want to rebuild

Regardless, it won’t be until you have working images in hand that I see MikroTik being forced to take notice and provide a build of `container.npk` for that platform. It’s not quite a classic chicken-and-egg situation, but I can’t ignore the hiss of radio silence I got in response to [this challenge](https://forum.mikrotik.com/viewtopic.php?t=204868#p1058351) on the forum.

Until someone breaks this logjam, it’s fair enough to say that RouterOS’s container runner only supports ARM and Intel CPUs.


# <a id="tlc"></a>Remaining Top-Level Commands

So ends my coverage of the heavy points. Everything else we can touch on briefly, often by reference to matters covered previously.