MikroTik Solutions

Bridged Container VETH

Motivation

The recommended container networking setup in MikroTik’s docs has you putting your containers on a secondary software bridge under a separate subnet, then setting up a source NAT scheme to convert those in-container IP addresses to LAN-side addresses. This has a number of downsides:

  1. NAT has inherent issues and limitations.

  2. If the container host is a router, it may have another NAT layer above this, which then means that if the container is exposed to the outer network (e.g. port-forwarded to the Internet) you’ve likely bought yourself one or more of the common double NAT problems.

  3. NAT takes a tiny amount of processing power to manage. Your RouterOS device and configuration may allow it to be hardware-offloaded, but even then, there are cases where we want zero overhead, as when the container is running a speed test program.
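For reference, MikroTik's recommended setup looks roughly like this. The names and addresses follow their documentation's example; treat this as a sketch of the idea rather than a verbatim copy of their instructions:

$ ssh myrouter
> /interface/bridge
  add name=containers
> /interface/veth
  add address=172.17.0.2/24 gateway=172.17.0.1 name=veth1
> /interface/bridge/port
  add bridge=containers interface=veth1
> /ip/address
  add address=172.17.0.1/24 interface=containers
> /ip/firewall/nat
  add chain=srcnat action=masquerade src-address=172.17.0.0/24

It is that final srcnat rule, along with the separate subnet that makes it necessary, that this article does away with.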

Given this list, you may be wondering why MikroTik recommends putting a container’s veth behind NAT in the first place.

It’s my belief that the original reasoning behind this is that Docker does this,1 even though RouterOS’s container feature is not based on any Docker code. They mimicked this aspect of Docker’s design without considering whether that makes sense on a router, where we have complete control over the local network design. The NAT default on Docker has better justification since it is running on a desktop or server computer with a preexisting network setup, and they want it to “just work” out of the box.

That logic may hold to some extent as a RouterOS container default, but it is my opinion that if you’re going to run a container on a router (as opposed to a switch) you’d best be thinking about network configuration details regardless.

Solution

My alternative doesn’t have any of the problems listed above:

$ ssh myrouter
> /interface/veth
  add address=192.168.88.2/24 gateway=192.168.88.1 name=veth1
> /interface bridge port
  add bridge=bridge1 interface=veth1

That is, we put veth1 directly on the bridge, giving it an IP in the same scheme as the rest of the bridged ports. We’re using RouterOS’s default LAN-side IP scheme of 192.168.88.0/24 for the purposes of this example, but it can be anything you like.
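With the veth bridged, the container attaches to it in the usual way. A minimal sketch, where the image name and root directory are placeholders you must supply yourself:

> /container
  add interface=veth1 remote-image=example/image:latest root-dir=disk1/image

One thing to watch for: nothing registers the veth’s static IP with the LAN’s DHCP server, so either pick an address outside the DHCP pool — as 192.168.88.2 is under the RouterOS defaults — or add a static lease for it to prevent the server from handing that address to another client.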

Consequences

There are a few potential pitfalls to be aware of with this scheme:2

1. Multiple MACs over a Single WiFi Link

WiFi is a point-to-point medium designed with the assumption that each client presents one wireless MAC to the AP, uniquely identifying itself. The AP uses that to get reply packets back to a specific client.

This is quite unlike wired Ethernet switching, where a given port might be plugged into a deep fan-out of other Ethernet switches, each with their own multiplicity of clients. You can expect even a cheap consumer-grade switch to have at least an 8192-entry MAC table, because even a lowly 5-port office switch can come to learn about thousands of MACs as traffic flows through it.

When you bridge a container to a WiFi network using this article’s technique, it presents the VETH’s assigned MAC address to the AP as a second client MAC, which is liable to confuse it. The typical symptom is that the bridged client becomes very slow due to the AP thrashing about trying to work out how to get reply packets back over a link that presents two different client MACs.

Do not attempt to apply this article’s solution over WiFi. Either follow MikroTik’s stock advice of interposing a NAT layer — thus hiding the container’s MAC behind the router’s actual WiFi-facing MAC — or switch to wired Ethernet.

2. Software Bridging

L2 hardware bridge offloading only works when every port on the bridge is a hardware interface. When you add a purely software interface like a VETH to a bridge, RouterOS must resort to a software bridge, thereby forcing all traffic across the CPU.

This is perfectly fine on proper “router” class hardware, since running all traffic through the CPU is likely what you wanted anyway. It is certainly the case with my iperf3 container, which inspired this article.

The problem comes when you were relying on hardware offloading to get reasonable speed, as with a “switch” class device. Here, following MikroTik’s advice to attach the container to a dedicated software bridge is advisable to the point of being near-mandatory. Otherwise, you turn your switch into a router with a weak CPU, a sad thing indeed.
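You can check whether a bridge’s ports are being hardware-offloaded by printing them; a sketch, assuming the default bridge name:

> /interface/bridge/port
  print where bridge=bridge1

Ports carrying the H flag are hardware-offloaded; a veth never gets that flag, so its traffic always crosses the CPU.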

License

This work is © 2024-2025 by Warren Young and is licensed under CC BY-NC-SA 4.0


  1. ^ …evidenced by their use of 172.17.0.0/24 for this, a subset of Docker’s default NAT IP scheme.
  2. ^ One former pitfall is no longer a problem as of RouterOS 7.20: the VETH getting a random MAC. That was another case of mimicking Docker behavior, with the consequence that the VETH might end up with a lower MAC than the bridge’s default, which could then screw up schemes like static DHCP assignments when you have the auto-mac setting enabled. Now the VETH gets the bridge’s MAC+1, preventing that.