But why not?

The simple reason is, the major OCI image build toolchains — mainly `docker buildx` and its clones — don’t target these other CPU types, a fact you can verify for yourself by running…

    $ docker run --privileged --rm tonistiigi/binfmt

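What it prints depends on which emulators your host has registered, but on a typical x86-64 Docker installation with the full emulator set loaded, the report looks something like this, abbreviated here:

    {
      "supported": [
        "linux/amd64",
        "linux/arm64",
        "linux/riscv64",
        "linux/ppc64le",
        "linux/s390x",
        "linux/386",
        "linux/mips64le",
        "linux/mips64",
        "linux/arm/v7",
        "linux/arm/v6"
      ],
      "emulators": [
        "qemu-aarch64",
        "qemu-mips64",
        "qemu-ppc64le",
        …
      ]
    }
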
“But **a-hah**,” you cry! “There’s MIPS *and* PPC in that list!” You then demand, “Where’s my build of `container.npk` for them, then?”

Did you overlook the “64” in those outputs? Those entries cover modern 64-bit versions of these CPU families, but MikroTik never shipped any 64-bit MIPS or PowerPC CPUs, not even in the 100G behemoth linked above, which gets by with a 32-bit MIPSBE CPU because it is designed as a switch, not a router, offloading nearly all traffic to the switch ASIC.

The only thing the above command tells you is that you can build images using commands like this:

    docker buildx build --platform linux/mips64 …

You’re free to do that all day long, but nothing you do will force a MIPSBE build of `container.npk` to run the resulting binaries short of including a CPU emulator in the image, [per above](#qemu).

You may then point out that you don’t actually need the cross-compilation toolchain to exist in Docker proper. FOSS toolchains do exist for TILE, 32-bit PPC, MMIPS, SMIPS… Why can’t you use them to cross-compile binaries on your desktop machine and then use tools like [Buildah](https://buildah.io/) to copy those binaries into the image unchanged?

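Mechanically, that workflow is simple enough. Here is a minimal sketch of it, in which the cross-toolchain name, the image name, and the trivial static `hello` program are placeholders; only stock Buildah subcommands (`from`, `copy`, `config`, `commit`, and `push`) are involved:

    # cross-compile a static binary on the development box
    mips-linux-musl-gcc -static -O2 -o hello hello.c

    # wrap it in a from-scratch image, marking the target architecture,
    # then export it as a tarball for sideloading onto the router
    ctr=$(buildah from scratch)
    buildah copy "$ctr" hello /hello
    buildah config --arch mips --entrypoint '["/hello"]' "$ctr"
    buildah commit "$ctr" hello-mips
    buildah push hello-mips docker-archive:hello-mips.tar
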
You can, but now you’ve bought several new problems:

1. Until someone actually does this and provides a compelling case to MikroTik that they should expend the effort to build `container.npk` for those old CPU designs, my sense of MikroTik’s public statements on this matter is that they have no interest in spending the engineering time. It’s not quite a classic chicken-and-egg situation, but without working images in hand, I don’t see a bid to make MikroTik append this task to its development pipeline succeeding.(^I base that interpretation on the hiss of radio silence I got in response to [this challenge](https://forum.mikrotik.com/viewtopic.php?t=204868#p1058351).)

2. It’s only “easy” for static binaries. If your program requires third-party shared libraries, you have to build them, too, along with the dynamic link loader, and… You’re likely to find yourself building a small Linux distribution as dependency leads to dependency, and now you’re off on a major [yak-shaving](https://softwareengineering.stackexchange.com/q/388092/39162) expedition.

3. This plan ignores one of the most powerful ideas in the OCI container ecosphere: you don’t cross-compile programs at all; instead, you boot a Linux distro image for the target platform under QEMU, then use the native tooling to produce “native” binaries. What this means in practice is that if you can assemble:

    * a sufficiently complete Linux distro ported to your target platform; _and_
    * a version of QEMU that will run it on your development system; _and_
    * the binfmt patches necessary to let your local kernel tie the two together…

## <a id="rename"></a>`rename`

RouterOS doesn’t let you set a container’s name on creation, much less rename it later. The closest you can come to this is to add a custom `comment`, which you can set both at “`add`” time and after creation.

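A short illustration of both, with the image, interface, and comment values being placeholders:

    /container/add remote-image=alpine:latest interface=veth1 comment="dns-cache"
    /container/set [find comment="dns-cache"] comment="dns-cache-v2"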

## <a id="restart"></a>`restart`

This shortcut for [`stop`](#stop) followed by [`start`](#start) doesn’t exist.

It often ends up being more complex than that because the `stop` operation is asynchronous. There is no flag to make it block until the container actually stops, nor a way to set a timeout after which it kills the container outright, as you get with the big-boy engines. You are likely to need a polling loop that waits until the running container’s state transitions to “stopped” before calling `/container/start` on it.

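A rough sketch of such a loop as a RouterOS script follows; it assumes the container carries a known `comment` and that its state shows up in a `status` property, which matches what `/container/print` reports:

    # find the container, stop it, wait for it to settle, then start it again
    :local ct [/container/find comment="dns-cache"]
    /container/stop $ct
    :while ([/container/get $ct status] != "stopped") do={ :delay 1s }
    /container/start $ct
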
See also [`--restart`](#restart) above.


## <a id="rm"></a>`rm`

RouterOS spells this `/container/remove`, but do be aware that there is no equivalent of `docker rm -f` to force the removal of a running container: RouterOS makes you stop it first.

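The nearest approximation of `docker rm -f` is therefore the same stop-and-wait dance shown under `restart` above, capped off with the removal; the item number here is a placeholder:

    /container/stop 0
    # poll here until the status reads "stopped", as in the restart example
    /container/remove 0
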
Another knock-on effect to be aware of stems from the lack of a local image cache: removing a container and reinstalling it from the *same* remote image requires RouterOS to re-download the image, even when done back-to-back, and even if you never started the container in between and thereby caused changes to the expanded image’s files. You can end up hitting annoying rate-limiting on the “free” registries in the middle of a hot-and-heavy debugging session due to this. Ask me how I know. 😁