This is the eighteenth part of the Availability Anywhere series. For your convenience you can find other parts in the table of contents in Part 1 – Connecting to SSH tunnel automatically in Windows
Let’s say that you run a project with Docker that uses a webserver (like a frontend application). This application needs to talk to some other service, like a backend or a database. We already know how to forward a port from the host to the container. This way we can host any service locally and talk to it from the container. We can also expose the webserver port (say, 8080) to the host, so we can connect to it from a browser running locally.
Let’s now consider a similar problem but in the opposite direction. Let’s say that we have multiple webservers running in multiple Docker containers, and we want to be able to connect to all of them simultaneously from the host. How can we do that?
The problem we typically face is how we bind ports. When using Docker, we typically do the following:
docker run -p 8080:80 <image>
With docker-compose we would put the same binding in the ports section of the service definition.
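For illustration, a minimal docker-compose sketch (the service and image names here are hypothetical placeholders):

```yaml
services:
  frontend:             # hypothetical service name
    image: my-frontend  # hypothetical image name
    ports:
      - "8080:80"       # host port 8080 -> container port 80
```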
However, we can’t expose multiple applications this way because only one binding per host port is allowed. Technically, the lines above try to bind port 80 in the container to port 8080 on the default IP address of the Docker daemon (which is 0.0.0.0 for docker-compose). This effectively binds all available interfaces we have. So, how do we deal with this? Let’s see the steps we can take.
First step – changing port bindings
That’s the easiest solution. Just change the port bindings (to something like 8081:80) and off you go. The problem is that you need to keep a mental map of the bindings, you may need to change the connection strings, and you need to remember not to push these changes to the repository. The last one is especially tricky because you need to change the docker-compose files that you normally do want to push to the repository.
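In docker-compose terms, this first step is just editing the binding; a sketch with placeholder names:

```yaml
services:
  frontend:             # hypothetical service name
    image: my-frontend  # hypothetical image name
    ports:
      - "8081:80"       # changed host port; remember not to commit this change
```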
Second step – using environment variables
The 12-Factor App methodology tells us to use environment variables as much as possible. Based on that, we can change docker commands and docker-compose files to accept environment variables for port bindings. Since we can put these values in an .env file and not push them to the repository, we can effectively change our configuration without changing the source code. However, since the connection strings are often in files that we do need to push to the repository, this may still not be enough.
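As a sketch (the variable name FRONTEND_PORT is an assumption, not from the original), the docker-compose file would read the port from the environment:

```yaml
services:
  frontend:                        # hypothetical service name
    image: my-frontend             # hypothetical image name
    ports:
      - "${FRONTEND_PORT:-8080}:80"  # falls back to 8080 if the variable is unset
```

The .env file next to it (kept out of the repository) would then contain just FRONTEND_PORT=8081.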
Unfortunately, this won’t work well if we decide to run the frontend locally (instead of running it in Docker). Since the frontend connects to the default port, this port either won’t be available on the host or will be mapped to some other application.
Third step – use separate network interfaces
Separate network interfaces can give us even more. When we bind ports like 8080:80, we actually bind them as 0.0.0.0:8080:80. We already mentioned that 0.0.0.0 means “bind all interfaces”. However, we can change this to some other IP address. The question is which one to use.
Windows lets us create a new loopback network interface with hdwwiz.exe. See the last section of this post for instructions on how to do that. Linux and Mac let us do something similar with aliases and new loopback interfaces. What’s more, we can assign a new IP address to the interface (like 192.168.123.5) and use the hosts file to map a subdomain like MYPROJECT.localhost to that address.
This way we can easily separate the traffic. We now bind all ports to 192.168.123.5:80:80, change connection strings to sql://MYPROJECT.localhost:PORT, and finally connect to the application from the browser easily at MYPROJECT.localhost:80. We can also configure Multi-Account Containers in Firefox or FoxyProxy in Firefox/Chrome to connect via a SOCKS proxy forwarded from the Docker container, so we would connect just as if we were inside the container.
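Putting it together, a sketch of such a binding (service and image names are hypothetical placeholders):

```yaml
services:
  frontend:             # hypothetical service name
    image: my-frontend  # hypothetical image name
    ports:
      - "192.168.123.5:80:80"  # bind only on the project's loopback alias
```

together with the hosts entry 192.168.123.5 MYPROJECT.localhost described above.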
The cool part is that this will work the same way regardless of whether we run the application in Docker or locally. If we run it inside Docker, then MYPROJECT.localhost:PORT will resolve to something like 127.0.0.1:PORT, which works well because we forward the port from the host to the container. However, if we run it locally, then it will resolve to 192.168.123.5:PORT, which is accessible from the host thanks to the interface.
What’s more, we can modify docker-compose files and docker commands transparently and push the changes to the repository. Other developers on the team won’t be affected, and we will be able to separate traffic from multiple projects locally. If another developer doesn’t have the new interface configured and the hosts file modified, then MYPROJECT.localhost will resolve to 127.0.0.1 and everything will still work.
Creating a new network interface with an IP address
Run hdwwiz.exe and select:
Install the hardware that I manually select from the list (Advanced)
Microsoft -> Microsoft KM-TEST Loopback Adapter
Go to network adapters. Rename the new adapter to MYPROJECT. Manually set the IP address to something like 192.168.123.5 and the mask to 255.255.255.255. Set the DNS IP if you need one. Then open C:\windows\system32\drivers\etc\hosts and add the following line:
192.168.123.5 MYPROJECT.localhost
On Linux we can add an alias to the loopback interface:
ifconfig lo:1 192.168.123.5/24
On Mac:
ifconfig lo0 alias 192.168.123.5
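On modern Linux distributions ifconfig may be missing; a sketch of the equivalent with the iproute2 ip tool (same address as above, requires root):

```shell
# Assumption: iproute2 is installed. Adds the alias address to the loopback interface:
sudo ip addr add 192.168.123.5/24 dev lo label lo:1
# Verify that the address is attached:
ip addr show lo
# Remove it again when no longer needed:
sudo ip addr del 192.168.123.5/24 dev lo
```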