Establishing Direct Communication Between Docker Containers on Two Different Hosts Using VXLAN Overlay Network

DN S Dhrubo
7 min read · Jul 20, 2023


Assume we have two Virtual Machines in the same network (10.10.0.0/16). On each VM, a Docker container is running. The two containers are not in the same Docker network, as they reside in different VMs, so direct communication between them is not possible. Our task is to make the two containers able to communicate and send packets to each other without enabling any NAT gateway or IP routing.

To do so, we need a little background on underlay and overlay networks. Let's talk about both in this context.

Underlay Network:

(Image source: https://community.fs.com/blog/a-closer-look-at-overlay-and-underlay-network.html)

The term “underlay” in networking refers to the physical network infrastructure that serves as the foundation for higher-level networking abstractions (overlay).

Key characteristics of underlay network:

  1. Physical Infrastructure: The underlay network represents the physical network infrastructure that connects the different hosts (VMs) together. It includes routers, switches, cables, and other networking hardware responsible for actual data transmission.
  2. Basic Connectivity: The underlay network provides the basic connectivity between the hosts (VMs) and is responsible for routing packets from one host to another based on IP addresses.
  3. Low-Level Abstraction: The underlay network operates at the lowest level of networking, dealing with raw data transmission and routing. It does not have any knowledge of higher-level constructs like virtual networks or containers.
  4. Higher-Level Abstractions: On top of the underlay network, higher-level networking abstractions are built, such as Docker networks, VLANs, or overlay networks. These abstractions provide additional functionality and manage the communication between containers or virtual machines.

In the context of Docker networking, the underlay network refers to the network infrastructure that connects the hosts running Docker containers. Docker’s networking subsystem operates on top of the underlay network and provides higher-level networking features such as container-to-container communication, Docker bridge networks, overlay networks (when using Docker Swarm), and network plugins.

The underlay network allows different VMs to communicate with each other, and Docker’s networking features enable containers within these VMs to communicate with each other as well. The underlay network is essential for facilitating the basic connectivity between VMs and providing the foundation for container networking within Docker.
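The underlay requirement in this article is simply that both VMs can reach each other over 10.10.0.0/16. As a small, hypothetical sanity check (the `ip_to_int` helper below is illustrative and not part of the original setup), plain shell arithmetic can confirm that the two VM addresses used later (10.10.0.11 and 10.10.0.22) fall inside the same /16:

```shell
# Hypothetical helper: convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

vm1=$(ip_to_int 10.10.0.11)   # VM1's underlay address
vm2=$(ip_to_int 10.10.0.22)   # VM2's underlay address
mask=$(( 0xFFFF0000 ))        # a /16 netmask as an integer

# Two hosts are in the same /16 when the masked (network) parts match.
if [ $(( vm1 & mask )) -eq $(( vm2 & mask )) ]; then
  echo "VM1 and VM2 share the same /16 underlay network"
fi
```

In practice you would simply run `ping 10.10.0.22` from VM1 (and vice versa) before building anything on top: if the underlay cannot deliver packets between the VMs, no overlay will fix that.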

Overlay Network:

(Image source: https://community.fs.com/blog/a-closer-look-at-overlay-and-underlay-network.html)

An overlay network is a higher-level networking abstraction built on top of the underlay network. It allows containers running on different hosts (VMs) to communicate with each other seamlessly, even if they are located on different physical machines and are part of different subnets. Overlay networks are particularly useful when deploying containerized applications in distributed and multi-host environments.

Key characteristics of overlay networks:

  1. Cross-Host Communication: Overlay networks enable containers on different hosts (VMs) to communicate as if they were on the same local network. This allows you to build distributed applications and microservices that span multiple hosts while maintaining connectivity between containers.
  2. Virtual Network Abstraction: An overlay network provides a virtual network abstraction that spans across all participating hosts in the Docker Swarm (in the case of Docker Swarm mode) or a multi-host environment. It abstracts away the complexities of the underlay network and allows containers to communicate using their virtual IP addresses.
  3. Encapsulation and Tunneling: Communication between containers in different VMs is achieved using encapsulation and tunneling techniques. When a container sends a packet to another container in a different VM, the packet is encapsulated with additional headers that contain information about the destination container and the overlay network. The packet is then sent through the underlay network, and on the receiving side, the encapsulation is removed, and the original packet is delivered to the destination container.
  4. Scalability and Load Balancing: Overlay networks are designed to scale efficiently and distribute network traffic across multiple hosts. They support load balancing and traffic distribution mechanisms to ensure optimal performance and availability.

In Docker, overlay networks are primarily used with Docker Swarm, the built-in container orchestration solution for Docker. When you create an overlay network in Docker Swarm mode, it becomes available to all nodes in the Swarm, and containers can be connected to this network regardless of the host they are running on.

The combination of Docker Swarm and overlay networks provides a powerful solution for deploying and managing containerized applications across a cluster of VMs, ensuring seamless communication between containers, even when they are distributed across different hosts and subnets. This allows for the creation of highly available, scalable, and distributed applications.
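One practical consequence of the encapsulation described above: every VXLAN frame carries extra headers on the wire, so the usable MTU inside the overlay shrinks. A quick back-of-the-envelope sketch, assuming a standard 1500-byte MTU on the underlay:

```shell
# VXLAN wraps the original frame in an outer Ethernet (14 bytes),
# outer IP (20 bytes), outer UDP (8 bytes), and VXLAN (8 bytes) header.
overhead=$((14 + 20 + 8 + 8))
underlay_mtu=1500
echo "VXLAN overhead: ${overhead} bytes"
echo "Safe MTU inside the overlay: $((underlay_mtu - overhead))"
```

This is why overlay interfaces are commonly configured with an MTU of 1450 (or the underlay MTU is raised with jumbo frames); oversized inner packets would otherwise be fragmented or dropped.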

Let’s get familiar with VXLAN, VNI, and VTEP

These are the terms related to network virtualization and overlay networks. They are commonly used in the context of software-defined networking (SDN) and container orchestration systems like Docker and Kubernetes.

(Image source: https://www.analysisman.com/2018/05/extreme-switch-how-to-configure-vxlan.html)

  1. VXLAN (Virtual Extensible LAN): VXLAN is a network virtualization technology that enables the creation of virtual Layer 2 networks over a Layer 3 (IP) network infrastructure. It is used to overcome the limitations of traditional VLANs, which have a limited number of available IDs (4096 VLAN IDs) and cannot be easily extended across data centers and clouds. VXLAN encapsulates Layer 2 Ethernet frames within UDP packets, allowing the frames to traverse Layer 3 networks. This allows for the creation of larger Layer 2 domains that can span across multiple physical networks, making it more scalable and flexible for modern data center environments.
  2. VNI (VXLAN Network Identifier): VNI is a 24-bit identifier used in VXLAN to uniquely identify each virtual network. Each VXLAN segment or virtual network is assigned a unique VNI, allowing the Layer 2 traffic within that segment to be isolated from traffic in other segments. This segregation is crucial for multi-tenancy and security purposes in virtualized environments. When a packet is sent across a VXLAN network, the VNI is used to identify the destination virtual network, ensuring that the packet is delivered to the correct endpoint.
  3. VTEP (VXLAN Tunnel Endpoint): VTEP is the network element responsible for encapsulating and decapsulating Ethernet frames into and from VXLAN packets. It acts as a gateway between the physical network and the VXLAN overlay network. Each VTEP has at least one interface in the physical network and one interface in the VXLAN overlay network. When a host or virtual machine wants to communicate with another host in a different VXLAN segment, its VTEP encapsulates the Ethernet frame into a VXLAN packet and sends it to the remote VTEP. The remote VTEP then decapsulates the packet and delivers it to the destination host within its respective VXLAN segment.

In summary, VXLAN is a network virtualization technology that enables the creation of large-scale overlay networks. Each virtual network is identified by a VNI, and communication between these virtual networks is facilitated by VTEPs. This technology is widely used in modern data centers and cloud environments to provide flexible and scalable network solutions.
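To make the 24-bit VNI concrete, here is a small illustrative calculation (the VNI value 100 matches the VXLAN ID used in the commands later in this article):

```shell
# The VNI is a 24-bit field inside the 8-byte VXLAN header.
vni=100
printf 'VNI %d encodes as the 3-byte value 0x%06x\n' "$vni" "$vni"
# 24 bits allow about 16 million segments, versus 4096 traditional VLAN IDs:
echo "Maximum VNI: $(( (1 << 24) - 1 ))"
```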

Now we are going to create an overlay network using VXLAN that creates a tunnel on top of the underlay network:

Our two VMs are in the same subnet (10.10.0.0/16), as shown below:

  1. VM1 (10.10.0.11) -> Docker network (172.18.0.0/24) -> Container (ubuntu, 172.18.0.11)
  2. VM2 (10.10.0.22) -> Docker network (172.18.0.0/24) -> Container (ubuntu, 172.18.0.22)

The goal is to ping from one container to another without enabling any NAT gateway and IP routing.

Solution steps:

  1. In VM1,

Install Docker
Create a Docker network with a specific subnet (172.18.0.0/24)
Run a container attached to that network, giving it an IP (172.18.0.11) from the subnet

2. In VM2,

Do the same as in VM1, but keep the container IP different (172.18.0.22)

3. Again in VM1,

Add a VXLAN interface with an ID, specifying the remote IP (here VM2's IP, 10.10.0.22) and the VXLAN UDP port
Bring the VXLAN interface up
Attach the newly created VXLAN interface to the Docker bridge network interface

4. In VM2,

Do the same as in VM1, but specify the remote IP carefully (here VM1's IP, 10.10.0.11).

5. Now we are ready to check the ping. To ping from the VM1 Docker container to the VM2 Docker container:

Open a bash terminal in the VM1 container
Update it and install the necessary tools
Ping the other container's IP (the VM2 Docker container's IP)

You will find the commands below

Set up Docker in both VM1 and VM2:

sudo apt update
sudo apt install -y docker.io
sudo docker network create --subnet 172.18.0.0/24 vxlan-net
sudo docker network ls

Run a container in VM1:

sudo docker run -d --net vxlan-net --ip 172.18.0.11 ubuntu sleep 3000
sudo docker ps
sudo docker inspect <container id> | grep IPAddress
# ping the docker bridge ip to see whether the traffic can pass
ping 172.18.0.1 -c 2

Run a container in VM2, the same as in VM1 but with a different IP:

sudo docker run -d --net vxlan-net --ip 172.18.0.22 ubuntu sleep 3000
ping 172.18.0.1 -c 2

Set up VXLAN in VM1:

# brctl comes from the bridge-utils package (sudo apt install -y bridge-utils)
brctl show
# "dev eth0" assumes the VM's underlay interface is eth0; adjust if yours differs
sudo ip link add vxlan-demo type vxlan id 100 remote 10.10.0.22 dstport 4789 dev eth0
ip a | grep vxlan
sudo ip link set vxlan-demo up
# <br-id> is the bridge of the vxlan-net docker network, as listed by brctl show
sudo brctl addif <br-id> vxlan-demo

Set up VXLAN in VM2 (note that the remote IP is now VM1's address):

brctl show
sudo ip link add vxlan-demo type vxlan id 100 remote 10.10.0.11 dstport 4789 dev eth0
ip a | grep vxlan
sudo ip link set vxlan-demo up
sudo brctl addif <br-id> vxlan-demo

Now enter the container in VM1 and ping the container in VM2:

sudo docker exec -it <container id> bash
apt-get update
apt-get install -y net-tools iputils-ping
ping 172.18.0.22
# optional: on the VM1 host (not inside the container), you can watch the
# VXLAN-encapsulated packets crossing the underlay with:
#   sudo tcpdump -ni eth0 udp port 4789
