The Dockerfile
builds a single static
binary of Tinyproxy and sets it to run in the foreground by default.
Its small (~0.2 MiB) size makes it ideal for creating simple reverse
proxy setups on resource-constrained MikroTik RouterOS boxes. As such,
it’s built not only for 32-bit and 64-bit ARM CPUs but also for Intel
platforms, because why not?
The current build requires RouterOS 7.10 or higher due to improvements MikroTik made to its OCI container compatibility.
See the Makefile
for further details on building,
configuring, and running this container.
This container does not run without a /etc/tinyproxy.conf
file. Rather than copying it into the container, I suggest you keep it
outside and map it in via the runtime’s volumes/mount system. You
will have to prepare this file ahead of time. I’ve included a sample that
sets Tinyproxy up as a reverse proxy for a hard-coded HTTP-only Internet
host selected purely for demo purposes; you will doubtless need to
reconfigure this to meet your local needs. In addition to the program’s
documentation, you may find the heavily-commented stock
configuration file helpful.
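If you’d rather start from a bare-bones example than the full stock file, a minimal reverse-proxy configuration looks something like the sketch below. The upstream host here is a placeholder, not the demo host from the bundled sample:

Port 8888
Timeout 600
LogLevel Info

# Map the proxy’s root onto a single upstream host. Replace the
# placeholder with the HTTP host you actually want to front.
ReversePath "/" "http://example.com/"

# Refuse ordinary forward-proxy requests; serve only the mapping above.
ReverseOnly Yes

# Track the reverse path via a cookie so redirects keep working.
ReverseMagic Yes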
The provided resolv.conf
file is needed under RouterOS only, since its
bare-bones runtime does not inject a DNS configuration into
the running container, as the big-boy runtimes do. Unless you are using
MikroTik’s default IP scheme and have a caching DNS server accepting
external requests on that IP, you will have to change the single line in
this file to point to a DNS server if you wish to give the proxy domain
names in URLs.
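For reference, the file’s single line just names that server. Assuming MikroTik’s default scheme, where the router itself is 192.168.88.1 and is running a DNS cache, it would read:

nameserver 192.168.88.1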
Simple Method
Start by installing the container package per MikroTik’s docs, then say:
$ scp resolv.conf tinyproxy.conf myrouter:
$ ssh myrouter
> /file
> set name=disk1/etc/resolv.conf resolv.conf
> set name=disk1/etc/tinyproxy.conf tinyproxy.conf
> /container
> mounts/add name=tpconf src=disk1/etc dst=/etc
> add remote-image=tangentsoft/tinyproxy:latest \
interface=veth1 \
start-on-boot=yes \
mounts=tpconf \
logging=yes
> start 0
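Once it’s started, you can confirm the container is running, and since the add command passed logging=yes, Tinyproxy’s output shows up in the router’s log:

> /container/print
> /log/print where topics~"container"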
If your device is configured as a router with a firewall, you may need to add something like this:
/ip firewall filter
add place-before=0 protocol=tcp dst-port=8888 \
action=accept chain=forward out-bridge-port=veth1
Yes, believe it or not, containers on RouterOS are accessed through the “forward” chain even when they’re bound to the main bridge. Moving this rule to the “input” chain will cause it to have no effect.
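Since traffic reaches the container through the forward chain, you talk to the container’s own address, not the router’s. Assuming veth1 was assigned 172.17.0.2 (substitute whatever address you gave it) and the sample reverse-proxy configuration is in place, this request from a LAN machine should come back with the proxied host’s content:

$ curl -i http://172.17.0.2:8888/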
Remote Tarball Method
If you need to install the container via an image tarball, the simplest way to fetch it is:
$ docker pull --platform linux/arm/v7 tangentsoft/tinyproxy:latest
$ docker image save tangentsoft/tinyproxy:latest > tinyproxy.tar
$ scp tinyproxy.tar myrouter:
That assumes you’ve got a 32-bit ARM-based router such as the RB4011.
For 64-bit ARM routers, make the --platform
argument linux/arm64
instead.
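From there, the router-side steps parallel the simple method, except that the container is created from the uploaded tarball rather than a remote image; roughly:

$ ssh myrouter
> /container
> add file=tinyproxy.tar interface=veth1 \
    start-on-boot=yes \
    mounts=tpconf \
    logging=yes
> start 0

This assumes you’ve already set up the tpconf mount as shown earlier.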
Source Method
You can instead build the container from this source repo:
$ fossil clone https://tangentsoft.com/mikrotik
$ cd mikrotik/tinyproxy
$ make PLATFORMS=linux/arm64 && scp tinyproxy.tar myrouter:
Explicitly setting PLATFORMS
like that causes it to build for that
one CPU type, not all four as it will by default. That not only makes
the build go much faster,[^1] it is necessary to
make the tarball unpack on RouterOS, which doesn’t currently understand
how to disentangle multi-platform image tarballs.
You can use any platform name here that your container builder supports. We prefer Docker for this: although Podman 5 can cross-compile out of the box, it’s fiddly to get working.
We’re using the skopeo
tool
from the Podman project to fix up some formatting issues in image
tarballs exported this way, which RouterOS is currently intolerant of,
at least as of ROS 7.17beta4.
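The Makefile encapsulates the details, but the essence is a skopeo copy from the local Docker image store into a cleanly-formatted archive, something along these lines; the exact transports shown are illustrative, not necessarily the ones the Makefile uses:

$ skopeo copy docker-daemon:tangentsoft/tinyproxy:latest \
    oci-archive:tinyproxy.tar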
[^1]: And not merely 4× faster as you might assume, since non-native builds under Docker go through a QEMU emulation layer, meaning only the native build runs at native speed.