Docker networking 101

David Cheong
16 min read · Dec 28, 2022

Docker has become a mainstream technology for hosting applications, for several reasons: ease of use, flexibility, the rise of microservices architecture, and more. Docker networking is another big reason, because it lets you freely connect workloads whether or not they are hosted in Docker.

The next question is: which network driver should I use? Each driver offers tradeoffs and has different advantages depending on the use case. Docker Engine ships with several built-in network drivers, and networking vendors and the community offer plug-in network drivers as well. The most commonly used built-in drivers are bridge, overlay, and macvlan. Together they cover a comprehensive range of networking use cases and environments.

What is a Docker network?

Networking is about communication among processes, and the same goes for Docker networking. Docker networking is primarily used to establish communication between containers, or between containers and the outside world via the host machine where the Docker daemon is running.

Different types of networks are available for Docker, each suited to different purposes. We'll explore the network drivers supported by Docker, along with some code samples.

What are Docker network drivers?

Docker network drivers are pluggable interfaces that provide the actual network implementations for Docker Containers. Docker comes with several drivers out-of-the-box that provide core networking functionality for many use cases.

A fresh Docker installation comes with three networks out of the box: bridge, host, and none. However, these may not fit every use case, so we'll also explore user-defined networks and additional drivers such as overlay and macvlan.

The following are the Docker network drivers covered in this article:

  • Bridge Driver — Default
  • Bridge Driver — User Defined
  • Host
  • Macvlan Driver — Bridge Mode
  • IPvlan Driver — L2
  • IPvlan Driver — L3
  • Overlay
  • None

Bridge Driver — Default

Docker handles communication between containers by creating a default bridge network, so you often don’t have to deal with networking and can instead focus on creating and running containers. This default bridge network works in most cases, but it’s not the only option available.

The default bridge driver is the most commonly used and is simple to understand and troubleshoot, which makes it a good networking choice for developers and those new to Docker. When you start using Docker, a bridge network is created automatically, and all newly started containers connect to it unless a network option is specified.

The bridge driver creates a private network internal to the host so that containers on this network can communicate. External access is granted by publishing container ports to the host. Docker secures the network by managing rules that block connectivity between different bridge networks.

Behind the scenes, the Docker Engine creates the necessary Linux bridges, internal interfaces, iptables rules, and host routes to make this connectivity possible. In the example below, the default Docker bridge network is used and two containers are attached to it. With no extra configuration, the Docker Engine does the necessary wiring and configures security rules to prevent communication with other networks. A built-in IPAM driver provides the container interfaces with private IP addresses from the bridge network's subnet.
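
As a quick sketch (the container name webtest and host port 8080 are arbitrary choices for illustration), publishing a port is what makes a container on the default bridge reachable from outside the host:

$ docker run -itd -p 8080:80 --name webtest nginx
$ curl localhost:8080   # the published host port forwards to port 80 inside the container
$ docker rm -f webtest  # clean up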

Once Docker is installed, check the host's network interfaces with the ip address show command and you will see a new interface named docker0; this is the default bridge that Docker creates automatically.

The docker0 network interface is created automatically when you install the Docker Engine.

Check the available networks by running the docker network ls command:

$ docker network ls --filter name=bridge
NETWORK ID     NAME      DRIVER    SCOPE
0b3ecfd43c69   bridge    bridge    local

Start two busybox containers named puppy and kitty in detached mode by passing the -itd flags:

$ docker run -itd --name puppy busybox
$ docker run -itd --name kitty busybox

Run the docker ps command to verify that the containers are up and running:

$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED          STATUS          PORTS     NAMES
21ad60f86537   busybox   "sh"      14 seconds ago   Up 13 seconds             kitty
c819e44f2b12   busybox   "sh"      46 seconds ago   Up 44 seconds             puppy

Verify that the containers are attached to the default bridge network using the docker network inspect bridge command:

$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "0b3ecfd43c697742f7b5fb7faf89f1f4d463dfec1160258b0a309143344f8c24",
        "Created": "2022-12-12T17:25:04.157451621+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "21ad60f86537966fad5880c9104ba7bf6064a2c343b3b534db020eaa80b2ba97": {
                "Name": "kitty",
                "EndpointID": "36bc24d5105cb27199997425cace3478d2c4fa0bc2cacc516432fe088cfde7c4",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "c819e44f2b124eb1453603c9edbbca9d2ef7e4a3133f49a8eb9bcbcaddac50ce": {
                "Name": "puppy",
                "EndpointID": "00efd32aab90e6361d1c3be093e783076213e21b623dc07496d6453eb2436a5f",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Under the Containers key, you can observe that the two containers (puppy and kitty) are listed along with their IP addresses. When you check the Docker host's interfaces with ip address show or the bridge link command, you will see two more veth interfaces created and linked to docker0.

$ bridge link
7: veth19c821a@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
11: veth5ffa562@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2

Since the containers are running in the background, attach to the puppy container and try to ping kitty by its IP address.

$ docker attach puppy
/ # whoami
root

/ # hostname -i
172.17.0.2
/ # ping 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=2.455 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.210 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.249 ms
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.210/0.971/2.455 ms

/ # ping puppy
ping: bad address 'puppy'

/ # ping kitty
ping: bad address 'kitty'

Observe that the ping works by passing the IP address of kitty but fails when the container name is passed instead.

The downside of the default bridge driver is that containers can only communicate via IP address; there is no automatic service discovery that resolves container names to IP addresses. Every time you run a container, a different IP address may be assigned to it. This may work well for local development or CI/CD, but it's definitely not a sustainable approach for applications running in production.

Another reason not to use it in production is that it will allow unrelated containers to communicate with each other, which could be a security risk.

Bridge Driver — User Defined

A user-defined bridge network works the same way as the default bridge, except that the user creates it. If you are using docker-compose without specifying a network, the Docker Engine will still create a user-defined bridge network automatically and connect the containers to it.
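
Here is a minimal sketch of that Compose behaviour (the project directory myapp and its two services are hypothetical); Compose creates a user-defined bridge network named after the project and attaches the services to it:

$ mkdir myapp && cd myapp
$ cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
  cache:
    image: redis
EOF
$ docker compose up -d
$ docker network ls --filter name=myapp   # shows myapp_default using the bridge driver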

To create a user-defined bridge network, just use the following command:

$ docker network create myNetwork

Check the available networks by running the docker network ls command; you should see a new network named myNetwork using the bridge driver. This is the newly created user-defined bridge network.

$ docker network ls
NETWORK ID     NAME        DRIVER    SCOPE
0b3ecfd43c69   bridge      bridge    local
a1ab45f786aa   host        host      local
e69eada92671   myNetwork   bridge    local
f6c32eebed06   none        null      local

Start two containers and connect them to the newly created network:

$ docker run -itd --rm --network myNetwork --name spiderman busybox 
$ docker run -itd --rm --network myNetwork --name superman busybox

Run the docker ps command to verify that the containers are up and running:

$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED         STATUS         PORTS     NAMES
b43578dd0d5a   busybox   "sh"      5 minutes ago   Up 5 minutes             superman
77d1773c0d10   busybox   "sh"      5 minutes ago   Up 5 minutes             spiderman

Verify that the containers are attached to the myNetwork network using the docker network inspect myNetwork command:

$ docker network inspect myNetwork
[
    {
        "Name": "myNetwork",
        "Id": "e69eada9267131d781737ea3dae280f1d85064cf675174f59afa742ac4f0764e",
        "Created": "2022-12-12T21:57:40.327340625+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "77d1773c0d10efd589320556daf84029733b984894647904f06e13e8a1c6bcf9": {
                "Name": "spiderman",
                "EndpointID": "9392c0c6c0d4ae90d9868c059d2b2180885993374bf5442e76205f0fb9ada246",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "b43578dd0d5a8683482d96d4786743628d0eba1e36b73fcb42aa167539d3008d": {
                "Name": "superman",
                "EndpointID": "d6716548b2886336b86449b9f83acfb660ef7ae05be90e23e4bbef69b0239515",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Under the Containers key, you can observe that the two containers (superman and spiderman) are listed along with their IP addresses. Since the containers are running in the background, attach to the superman container and try to ping spiderman by both its IP address and its container name.

$ docker attach superman
/ # whoami
root
/ # hostname -i
172.18.0.3
/ # ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.904 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.322 ms
--- 172.18.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.322/0.613/0.904 ms
/ # ping spiderman
PING spiderman (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.085 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.115 ms
--- spiderman ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.085/0.100/0.115 ms
/ # ping superman
PING superman (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=1.304 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.081 ms
--- superman ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss

Observe that the ping works both by IP address and by the container names spiderman and superman.

There are several advantages to using a user-defined bridge network. One is container isolation: only containers on the same bridge network can communicate freely, without needing to expose ports to the host machine. Another is automatic name resolution: containers can reach each other by container name as well as by IP address, so you can specify either one when talking to another container.
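
As a small sketch building on the earlier examples (assuming the puppy container from the default-bridge section is still running), you can also attach an existing container to a user-defined network at runtime and immediately get name resolution for it:

$ docker network connect myNetwork puppy
$ docker exec superman ping -c 2 puppy   # resolves by name once both containers are on myNetwork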

Host Driver

The host driver means the container is attached directly to the Docker host's network stack, so there is no network isolation from the host or from other containers on the same machine that also use host networking, since they share the same interfaces. The container does not get its own IP address allocated. For instance, if you run a container that binds to port 80 and uses host networking, the container's application is available on port 80 at the host's IP address. Because the container has no IP address of its own when using the host driver, port mapping does not take effect, and the -p, --publish, -P, and --publish-all options are ignored, producing a warning instead.
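
For instance (a sketch with an arbitrary container name and port mapping), trying to publish a port on a host-networked container has no effect:

$ docker run -itd --network host -p 8080:80 --name webhost nginx
# Docker warns that published ports are discarded in host network mode;
# nginx is reachable directly on the host's port 80, not on 8080.
$ docker rm -f webhost   # clean up before the example below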

The host driver can be useful to optimise performance, and in situations where a container needs to handle a large range of ports, as it does not require network address translation (NAT) and no "userland-proxy" is created for each port. The host driver only works on Linux hosts and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

To test a container with the host driver, pass in the option --network host, without any port mapping options.

$ docker run -itd --network host --name web nginx

Inspect the host network, and you will see that there is a newly created container attached to it.

$ docker network inspect host
[
    {
        "Name": "host",
        "Id": "a1ab45f786aa56bba6807661a05d836ae9818e69a020792414b6904482e060ea",
        "Created": "2022-12-12T15:21:30.345152177+08:00",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ad30d96013ba91e371550f6cf915c8de6258edc6135fa9dfedadbaa43204325b": {
                "Name": "web",
                "EndpointID": "75c17c7fbf316cba61d4f87eca1401714c5b9084790c442d00d67c454854f414",
                "MacAddress": "",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Next, we can verify that the container is running and configured correctly by accessing the Nginx site with the curl localhost command on the host machine. You will see the Nginx default page, which means the container is listening directly on the Docker host on port 80.

$ curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Macvlan Driver — Bridge Mode

Some applications, especially legacy applications, need to be able to connect to the physical network directly. In this situation, macvlan is the right option to host the application. When you use the macvlan driver, it assigns a MAC address to each container's virtual network interface, making it appear to be a physical network interface directly connected to the physical network. In this case, you need to designate a physical interface on your Docker host to use as the parent of the macvlan network, as well as the network's subnet and gateway.

To create a macvlan network that bridges with a given physical network interface, use --driver macvlan (or the shorthand -d macvlan) with the docker network create command. You also need to specify the parent, which is the interface the traffic will physically go through on the Docker host. To find the Docker host's physical interface name, you can use ip address show.

$ docker network create -d macvlan \
> --subnet=192.168.0.0/24 \
> --gateway=192.168.0.1 \
> -o parent=enp0s3 \
> myNewMacvlan

Check the available networks by running the docker network ls command; you should see a new network named myNewMacvlan using the macvlan driver.

$ docker network ls --filter name=myNewMacvlan
NETWORK ID     NAME           DRIVER    SCOPE
6f2d4461feea   myNewMacvlan   macvlan   local

To start a new Nginx container using the macvlan network, just specify the network name and an IP address from your physical network (make sure the IP is not already allocated to any other device on your network):

$ docker run -itd --rm --network myNewMacvlan --ip 192.168.0.123 --name web nginx

Once the container is up, try to access the site with any other laptop in the same network, and it should work.

This is handy if you wish to host your container directly on your physical network: the container gets its own IP on that network and appears to be physically attached to it, and there is no need to publish any ports from the container to the Docker host for it to be accessible.

But there is something to keep in mind about macvlan: it does not use the DHCP server on your physical network to obtain addresses. Instead, Docker assigns IPs to containers from its own IPAM, which may conflict with other addresses on the same network. To avoid this, you can specify the IP address when attaching the container to the macvlan network, or specify an IP range with the option --ip-range=192.168.0.128/25 when you create the macvlan network, so that Docker only hands out addresses from a range that does not conflict with your existing network.
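
A minimal sketch of that option (the network name myMacvlanRange is arbitrary; the subnet, gateway, and parent interface follow the lab values used above):

$ docker network create -d macvlan \
> --subnet=192.168.0.0/24 \
> --ip-range=192.168.0.128/25 \
> --gateway=192.168.0.1 \
> -o parent=enp0s3 \
> myMacvlanRange
# Containers on this network only receive addresses from the 192.168.0.128/25 half of the subnet.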

Another thing to note is that for the macvlan driver to work, the parent interface on the Docker host needs promiscuous mode enabled. Since I'm using VirtualBox with an Ubuntu guest as my test lab, I need to enable it in both places manually.

Use the following command to enable promiscuous mode on the interface in the Ubuntu guest machine:

$ sudo ip link set enp0s3 promisc on

In VirtualBox, select the guest machine, go to Settings -> Network -> Advanced, and make sure Promiscuous Mode is set to Allow All.
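
The same VirtualBox setting can also be changed from the command line; a sketch, assuming a hypothetical VM named ubuntu-docker whose adapter 1 is the one backing enp0s3:

# Power the VM off first; modifyvm only applies to stopped machines.
$ VBoxManage modifyvm "ubuntu-docker" --nicpromisc1 allow-all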

IPvlan Driver — L2

IPvlan in L2 mode works much like macvlan bridge mode, except that no unique MAC address is assigned to each container's virtual interface. If you don't want to make any changes to promiscuous mode, this is the best alternative option.

To create an IPvlan L2 network, use the docker network create command and specify the driver as ipvlan:

$ docker network create -d ipvlan \
--subnet=192.168.0.0/24 \
--gateway=192.168.0.1 \
-o ipvlan_mode=l2 \
-o parent=enp0s3 ipvlanl2

Double-check the created network

$ docker network ls --filter name=ipvlanl2
NETWORK ID     NAME       DRIVER    SCOPE
6e1c19a92bc1   ipvlanl2   ipvlan    local

To test the network driver, just launch a new Nginx container by specifying the network name and IP address.

$ docker run -itd --rm --network ipvlanl2 --ip 192.168.0.199 --name web nginx

Try to access Nginx from another machine on the same network, and you will see the default Nginx page.

IPvlan Driver — L3

The IPvlan L3 driver operates at the IP layer: everything is plain routed IP, with no broadcast traffic and no per-container MAC addresses. In L3 mode, the Docker host acts like a router in front of the new container networks. These networks are not known to the upstream network unless routes are distributed.

If you wish to allow other machines on the physical network to reach containers on an IPvlan L3 network, you need to add a static route on your gateway, so that when someone tries to reach a container IP, the gateway knows to forward the traffic to the Docker host.
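
As a sketch, assuming the Docker host's address on the physical network is 192.168.0.215 and your gateway is a Linux box (on a home router this would be a static-route entry in its admin UI), the route for the first container subnet used below would look like this:

# On the gateway: send traffic for the IPvlan L3 container subnet to the Docker host
$ sudo ip route add 192.168.214.0/24 via 192.168.0.215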

However, if you create two different subnets in the same IPvlan L3 network, both of them share the same parent interface, which acts as a router between them, so containers on the two subnets are able to communicate with each other.

The following commands create an IPvlan L3 network, start a couple of containers attached to it, and have them ping each other:

# Create the IPvlan L3 network with two subnets sharing the same parent interface
docker network create -d ipvlan \
--subnet=192.168.214.0/24 \
--subnet=10.1.214.0/24 \
-o parent=enp0s3 \
-o ipvlan_mode=l3 ipnet210
# Start one container on each subnet
docker run --net=ipnet210 --ip=192.168.214.10 -itd alpine /bin/sh
docker run --net=ipnet210 --ip=10.1.214.10 -itd alpine /bin/sh
# Test L3 connectivity from 192.168.214.0/24 to 10.1.214.0/24
docker run --net=ipnet210 --ip=192.168.214.9 -it --rm alpine ping -c 2 10.1.214.10
# Test L3 connectivity from 10.1.214.0/24 to 192.168.214.0/24
docker run --net=ipnet210 --ip=10.1.214.9 -it --rm alpine ping -c 2 192.168.214.10

Overlay Driver

The overlay driver is a swarm-scope driver that creates a distributed network among multiple Docker hosts. This network sits on top of the host-specific networks, and it allows containers connected to it to communicate securely when encryption is enabled. Docker transparently handles routing each packet to and from the correct Docker daemon host and the destination container. The overlay driver radically simplifies many of the complexities of multi-host networking, as it provides IPAM, service discovery, multi-host connectivity, encryption, and load balancing without any external provisioning.

To create an overlay network, you first need to initialize Docker swarm, then create the network with the docker network create command, specifying the driver as overlay.
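
If swarm mode is not active yet, a minimal sketch of initializing it (the advertise address is the Docker host's IP in this lab; substitute your own):

$ docker swarm init --advertise-addr 192.168.0.215
# The command prints a docker swarm join token that worker nodes can use to join the cluster.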

$ docker network create -d overlay myoverlay
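
If you want the overlay's application traffic encrypted, as mentioned above, the network can instead be created with the encrypted option; a sketch (the network name is arbitrary):

$ docker network create -d overlay --opt encrypted myoverlay-secure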

To deploy containers in Docker swarm, use the docker service create command and specify the network name:

$ docker service create --network myoverlay \
-p 80:80 \
--name web \
--replicas 2 \
nginx
98ek6z0fghg5uff5nuwuch2wx
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged

Verify that the containers are attached to the overlay network using the docker network inspect myoverlay command. Docker also adds an endpoint with the lb- prefix, which is the load balancer used to distribute requests among the service's tasks within the cluster.

$ docker network inspect myoverlay
[
    {
        "Name": "myoverlay",
        "Id": "m1l5jhkmzdwdphd0t52f6thor",
        "Created": "2022-12-15T09:42:19.699097165+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.1.0/24",
                    "Gateway": "10.0.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2cb9b36572377a1d9e89f5c899493bad3638660dbe48beb006e27708d8272b9d": {
                "Name": "web.1.h1j3ybyox9wa5yavtu94t5kvs",
                "EndpointID": "79a81723fce9f71c5bf99490125797c86af7c5885c634ca0bd7f0e4106b726c1",
                "MacAddress": "02:42:0a:00:01:15",
                "IPv4Address": "10.0.1.21/24",
                "IPv6Address": ""
            },
            "beafb0f5f4e2f22392518d1f5e51b0b5ba3316caacf42aa693f203f685f62e83": {
                "Name": "web.2.kpsguuxxfgm963cg40t7fjyru",
                "EndpointID": "f3766f26fa6acbf8297f16aa4d448879cec67cc3dc7226ed5d0dcfff8d6794bf",
                "MacAddress": "02:42:0a:00:01:17",
                "IPv4Address": "10.0.1.23/24",
                "IPv6Address": ""
            },
            "lb-myoverlay": {
                "Name": "myoverlay-endpoint",
                "EndpointID": "097b7a1da4fde4bf3b1e5761cb5d1f22cb973ca0a66c381d800c4638a01c89be",
                "MacAddress": "02:42:0a:00:01:18",
                "IPv4Address": "10.0.1.24/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "840e527f3ea6",
                "IP": "192.168.0.215"
            }
        ]
    }
]

You can try to access the service in a browser by browsing to the swarm manager's IP (or any Docker host's IP in the swarm). The Nginx default page will be shown.
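
For example, using the manager's address shown under Peers in the inspect output above:

$ curl http://192.168.0.215   # any node in the swarm answers on the published port 80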

None Driver

The none driver disables networking for a container, isolating it from other containers. Within the container, only the loopback interface is created to enable inter-process communication, using the IP address 127.0.0.1, which maps to the localhost name.

Start a busybox container named bird in detached mode and pass the network option none so that the container is not connected to any network:

$ docker run -itd --rm --network none --name bird busybox

Attach to the bird container and check its network; you can see that only the loopback interface is available.

$ docker attach bird
/ # ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever

Without other network interfaces such as eth0, the container can neither reach external networks nor be reached by them.

This makes the none network driver secure. But it also limits its usability, as we usually wish to enable communication between containers.

When do we need to use the none network driver?

One possible use case for the none network driver is running network-isolated applications that only perform file operations. This can be done by mounting volumes into the container, which can then do its work at certain intervals or in response to file change events. Some examples include using a container to generate database backups, process log files, and so on.

Another possible use case is running one-off commands that require network isolation: for example, performing computations and printing logs in a secure, network-isolated environment, running CI/CD jobs that export produced artifact files, or testing suspicious scripts that could contain malware.
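
A minimal sketch of such a job (the ./logs directory and app.log file are hypothetical): a one-off, network-isolated container that only touches a mounted volume:

$ docker run --rm --network none -v "$PWD/logs:/data" busybox \
> sh -c 'grep -c ERROR /data/app.log > /data/error-count.txt'
# The container has no network access at all; it can only read and write the mounted files.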

Reference: https://tech.david-cheong.com/docker-networking/
