
Linking Docker Containers

Published May 18, 2016 · Last updated Jan 18, 2017

Small Things, Working Together

I want to talk to you about some tiny things today: Docker containers.

They're super easy to run and to host a single program in, and they move between your environments really well, with no changes to the code that could cause breakages. They even put the power to create the environment a program runs in into the hands of the developer, who knows that program best.

However, no even moderately complicated system consists of just a single Docker container.

But how do you get Docker containers to network together?

How Docker Handles Networking

Thankfully, Docker solves these problems by mimicking technology you're already familiar with. Much as with physical networks and virtualised network devices, Docker lets you create networks and place containers inside them.

To demonstrate this I'm going to need something in the grand tradition of Blue Peter, and that is a few containers I prepared earlier.

Billies-MBP-2:~ billie$  docker run -p 80:80 --name=flappy quay.io/purplebooth/flappy-endpoint

As you can see from the above, this is a really simple Docker container: it runs a web server that returns 200 for all requests and listens only on port 80.

I'm sure you've seen a command like this before: it tells Docker to map port 80 on the Docker host to port 80 in this container, which means I can then connect to my virtual machine at 192.168.99.100 on port 80 and see that web page.

What this command is doing internally is telling Docker to add the new container to the default bridge network, called bridge, and to publish port 80 so that it can be connected to.
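If you want to sanity-check that from the outside, docker port will list the ports published for the container, and a quick curl against the Docker host (mine is the docker-machine VM at 192.168.99.100, so substitute your own address) should come back with a 200:

Billies-MBP-2:~ billie$ docker port flappy
80/tcp -> 0.0.0.0:80
Billies-MBP-2:~ billie$ curl -I http://192.168.99.100/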

We can see this network membership by running the following command:

Billies-MBP-2:~ billie$  docker network inspect bridge
[
  {
    "Name": "bridge",
    "Id": "b1a20d28732dbb2dd6f833fea71a942ead4c6c17508f8cc1556c830529a16ec3",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.17.0.0/16"
        }
      ]
    },
    "Internal": false,
    "Containers": {
      "9a11f72a89242c17377b7f7b87b5bd5eba9bcdfac848f640c068112c7d3d467a": {
        "Name": "flappy",
        "EndpointID": "05b1b68f41fcfa0b0042ce55a65c52f58713a1d090fb185ec93db60be35efad8",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.bridge.default_bridge": "true",
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
      "com.docker.network.bridge.name": "docker0",
      "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {}
  }
]

Notice how the container flappy appears in the Containers section.
If we look inside the container itself, we can see how the container sees this network.
Run this to get a shell inside the container:

Billies-MBP-2:~ billie$  docker exec -it flappy bash
root@9a11f72a8924:/var/www/html#

Now let's look at the network configuration:

root@9a11f72a8924:/var/www/html# ip addr      
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever

Here you can see that the container has been assigned an address, 172.17.0.2/16, from the 172.17.0.0/16 subnet specified in the default network we just inspected.

You'll notice that this address isn't 172.17.0.1. That's because Docker reserves the first IP, 172.17.0.1, for the Docker host on that network, and it's the host that does the IP forwarding.

If we look at the routes, you'll be able to see we're using that IP address as our gateway to get packets out to the wider network.

root@9a11f72a8924:/var/www/html# ip route list       
default via 172.17.0.1 dev eth0 
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.2 
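
All of this is visible from the host side too. docker inspect can read the container's address without exec-ing in, and, assuming you're using docker-machine with a machine named default like I am, you can ssh into the VM to look at the host end of the bridge, the docker0 interface named in the Options above:

Billies-MBP-2:~ billie$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' flappy
172.17.0.2
Billies-MBP-2:~ billie$ docker-machine ssh default ip addr show docker0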

Okay, so that's a lot of words. In practical terms we've just done the following.

[Diagram: packets coming from a browser, into the Docker host, and then into the container.]

Using This for Our Application's Networking

Okay, fine, but how do I get that container talking to another container, I hear you ask.

One of the big benefits of Docker is reuse, so let's use a pre-made Docker container to add SSL to our awesome service. It's just a simple reverse proxy that exposes an HTTPS endpoint with a self-signed certificate.

It'll look like this when we're done.

[Diagram: the same network as before, except where we had a single container we now have two; we're adding the ability to connect to the same service over HTTPS.]

Here you can see me using it as a proxy in front of the BBC website:

Billies-MBP-2:~ billie$ docker run -p 443:443 --name ssl-term -e UPSTREAM=www.bbc.co.uk purplebooth/nginx-ssl-terminator
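
If you want to poke at it yourself, curl does the job; the -k flag tells it to accept the self-signed certificate, and again 192.168.99.100 is just the address of my Docker host:

Billies-MBP-2:~ billie$ curl -kI https://192.168.99.100/

The response that comes back should be whatever www.bbc.co.uk served, proxied through the container.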

In order to get this working, the first thing we need to do is create a new bridge network for these two containers to live in. The reason we do this is that, unlike on the default bridge, on a network we create ourselves we can use DNS to resolve the IP address of any container in the network from just its container name, rather than having to know the address.

We do this by running the following command:

Billies-MBP-2:~ billie$ docker network create --driver=bridge flappy-public
f8bc98af79a3faed4cb54458d2338e19f0c3c2ed4034acc9491e4073c9e84e08

We now start the flappy container up in that network:

Billies-MBP-2:~ billie$ docker run -d -p 80:80 --net=flappy-public --name=flappy quay.io/purplebooth/flappy-endpoint 
a9d505e98029441a0a65368f3017c70ec814bd1963d8b603fbf5d576c018ea18

Then we simply launch the proxy container, with its upstream set to flappy, on the same network:

Billies-MBP-2:~ billie$ docker run -d -p 443:443 --net=flappy-public --name ssl-term -e UPSTREAM=flappy purplebooth/nginx-ssl-terminator
b069624294b30636a4698d75dece0114ee3e47275fed6b12e6a352f7ee26ec1f

If we inspect the network, we can see that it now has the two containers running inside it:

Billies-MBP-2:~ billie$ docker network inspect flappy-public
[
  {
    "Name": "flappy-public",
    "Id": "f8bc98af79a3faed4cb54458d2338e19f0c3c2ed4034acc9491e4073c9e84e08",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.18.0.0/16",
          "Gateway": "172.18.0.1/16"
        }
      ]
    },
    "Internal": false,
    "Containers": {
      "a9d505e98029441a0a65368f3017c70ec814bd1963d8b603fbf5d576c018ea18": {
        "Name": "flappy",
        "EndpointID": "df2815c8a4d1ba4197089a011d19005332af08dd6a6d1df2cb578bc29fe4ef90",
        "MacAddress": "02:42:ac:12:00:02",
        "IPv4Address": "172.18.0.2/16",
        "IPv6Address": ""
      },
        "b069624294b30636a4698d75dece0114ee3e47275fed6b12e6a352f7ee26ec1f": {
          "Name": "ssl-term",
          "EndpointID": "fd80b0acd0fc514bb02812452a44d64cc845b58e349a9ac99b9426ab0531a419",
          "MacAddress": "02:42:ac:12:00:03",
          "IPv4Address": "172.18.0.3/16",
          "IPv6Address": ""
        }
    },
    "Options": {},
    "Labels": {}
  }
]

And that's it: we've got our containers talking to each other.

Like magic, it works! The flappy container's IP address is resolved by DNS, and the two containers can connect to each other.
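
If you want to watch that resolution happen, you can ask for the lookup from inside the ssl-term container. This assumes the image has getent available (most glibc-based images do), and it should print flappy's address on the new network, 172.18.0.2:

Billies-MBP-2:~ billie$ docker exec ssl-term getent hosts flappy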

Using Networks to Enhance Security

Multiple containers talking on a single network is fine. However, when we want our applications to be secure, we need to think about how we control the data flowing to and from each container. Currently, any container in the flappy-public network can talk to any other, so a compromised container could be used as a jumping-off point to gain access to the rest of the system.

To minimise this risk, we're going to give the flappy service its own network and put it behind a gateway that has interfaces on both networks and that all requests must pass through. The SSL terminator service will connect to this gateway, and the gateway will make the connection to the flappy endpoint itself.

In theory, we could use the gateway to implement an application firewall too.

[Diagram: two networks, one containing ssl-term and the gateway, the other containing flappy and the gateway.]

Let's start by creating a new network.

Billies-MBP-2:~ billie$ docker network create flappy-private
6c95ab9c45b22c6657f8e31997be0314887f6a07537a944aaa10e73d22844e35

Then we make sure that flappy is only in the private network:

Billies-MBP-2:~ billie$ docker network connect flappy-private flappy
Billies-MBP-2:~ billie$ docker network disconnect flappy-public flappy
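
To confirm which networks a container ended up attached to, without wading through the full inspect output, you can use docker inspect's json template helper; the keys of the map it prints are the networks, which for flappy should now be just flappy-private:

Billies-MBP-2:~ billie$ docker inspect -f '{{ json .NetworkSettings.Networks }}' flappy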

Then we add the gateway (we'll just reuse the same image as the SSL terminator for this). The gateway sits on both the public and private networks by being attached to both, which gives the container two network interfaces.

Billies-MBP-2:~ billie$ docker run -d --net=flappy-public --name network-gate -e UPSTREAM=flappy purplebooth/nginx-ssl-terminator && docker network connect flappy-private network-gate
92385800ab4f7c6e2ddb25de5a071309bb91acff3b415a25967798c17518f85c

Let's then change the SSL terminator so it points at the gateway rather than at the flappy endpoint (which it will no longer be able to connect to).

Billies-MBP-2:~ billie$ docker stop ssl-term
ssl-term
Billies-MBP-2:~ billie$ docker rm ssl-term
ssl-term
Billies-MBP-2:~ billie$ docker run -d --net=flappy-public -p 443:443 --name ssl-term -e UPSTREAM=network-gate purplebooth/nginx-ssl-terminator
b004ee1aa05d257854cb0a6a96e8ff19a10cb312df5eba1a7a9adb375a52274a
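
It's worth checking that the whole chain still works end to end. A request to the HTTPS endpoint now travels from the browser to ssl-term, on to network-gate, and finally to flappy, and it should still come back with a 200 (the -k is still needed for the self-signed certificate):

Billies-MBP-2:~ billie$ curl -kI https://192.168.99.100/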

Ta-dah!

Now we're all set up: the flappy service is isolated on its own network and is only accessible via the gateway. Let's look at how that appears to Docker.

Billies-MBP-2:~ billie$ docker network inspect flappy-public
[
  {
    "Name": "flappy-public",
    "Id": "eaf3f689e8bb7d0b6b0b9256bb7793029cb4107cdd18936f3cb1844887d9eca2",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.19.0.0/16",
          "Gateway": "172.19.0.1/16"
        }
      ]
    },
    "Internal": false,
    "Containers": {
      "92385800ab4f7c6e2ddb25de5a071309bb91acff3b415a25967798c17518f85c": {
        "Name": "network-gate",
        "EndpointID": "9d1b24700d187de7c4196cbc16ee6b61c96358ba679d4576ce94d433ca0b8687",
        "MacAddress": "02:42:ac:13:00:02",
        "IPv4Address": "172.19.0.2/16",
        "IPv6Address": ""
      },
      "b004ee1aa05d257854cb0a6a96e8ff19a10cb312df5eba1a7a9adb375a52274a": {
        "Name": "ssl-term",
        "EndpointID": "9b196a7efad29414c24d3fe84a611a21746ee270d7fabef4b2c501e33c2a72d2",
        "MacAddress": "02:42:ac:13:00:03",
        "IPv4Address": "172.19.0.3/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {}
  }
]
Billies-MBP-2:~ billie$ docker network inspect flappy-private
[
  {
    "Name": "flappy-private",
    "Id": "6c95ab9c45b22c6657f8e31997be0314887f6a07537a944aaa10e73d22844e35",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.18.0.0/16",
          "Gateway": "172.18.0.1/16"
        }
      ]
    },
    "Internal": false,
    "Containers": {
      "6af6e2c91aa13d7274f9ddff37b9cb41d28b63f70670b363e7d64c31cf3b2441": {
        "Name": "flappy",
        "EndpointID": "db41c9156420a6545c8b19a7ada7531ba5cfa77dd1872e1aae0fab712b24d38a",
        "MacAddress": "02:42:ac:12:00:02",
        "IPv4Address": "172.18.0.2/16",
        "IPv6Address": ""
      },
      "92385800ab4f7c6e2ddb25de5a071309bb91acff3b415a25967798c17518f85c": {
        "Name": "network-gate",
        "EndpointID": "d83d612bdde15bf38463534275d11e60ed313e1d422b86fb48590e4c2f513deb",
        "MacAddress": "02:42:ac:12:00:03",
        "IPv4Address": "172.18.0.3/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {}
  }
]

You'll notice that the gateway container is now in both networks.

This type of setup means that it's impossible to connect to the flappy service directly, it gives you a place to add additional security measures, and it isolates your service, to some extent, from many kinds of attack.
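
You can verify that isolation from the ssl-term container itself. It now only shares the flappy-public network with the gateway, so Docker's embedded DNS will resolve network-gate for it but not flappy (again assuming getent is present in the image); the first lookup below should come back empty, the second with an address on the public network:

Billies-MBP-2:~ billie$ docker exec ssl-term getent hosts flappy
Billies-MBP-2:~ billie$ docker exec ssl-term getent hosts network-gate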

Docker networks are a really powerful tool, and this is just the tip of the iceberg. You can also create overlay networks that join multiple Docker hosts together, or even write your own network drivers.
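
Purely as a sketch of what that looks like (overlay networks need swarm mode or an external key-value store to be set up first, which is beyond the scope of this article, and the flappy-multihost name is just for illustration), creating one uses the same command as the bridge networks above:

Billies-MBP-2:~ billie$ docker network create --driver=overlay flappy-multihost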

Hopefully this has somewhat demystified connecting containers in Docker!
