Network Namespaces and Container Networking
So far, we have explored how network namespaces work, including creating an isolated network namespace environment within our system. Here’s a summary of what we’ve covered:
Connecting Namespaces:
- Create multiple network namespaces.
- Connect them through a bridge network.
- Create virtual cables or pipes with virtual interfaces on either end.
- Attach one end of each veth pair to a namespace and the other end to the bridge.
- Assign IP addresses and bring the interfaces up.
- Enable NAT (IP masquerade) so the namespaces can reach the outside world (see the command sketch after this list).
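To make those steps concrete, here is a minimal command sketch for a single namespace. The names (red, v-net-0) and the 192.168.15.0/24 addressing are arbitrary choices for illustration, not fixed conventions:

$ ip netns add red
$ ip link add v-net-0 type bridge
$ ip link set dev v-net-0 up
$ ip link add veth-red type veth peer name veth-red-br
$ ip link set veth-red netns red
$ ip link set veth-red-br master v-net-0
$ ip link set veth-red-br up
$ ip -n red addr add 192.168.15.1/24 dev veth-red
$ ip -n red link set veth-red up
$ ip addr add 192.168.15.5/24 dev v-net-0
$ ip -n red route add default via 192.168.15.5
$ iptables -t nat -A POSTROUTING -s 192.168.15.0/24 ! -o v-net-0 -j MASQUERADE

Repeat the veth steps with a different address for each additional namespace you want to attach to the bridge.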
Docker’s bridge networking option uses the same principles, only with different naming conventions. Other container solutions, such as rkt, Mesos Containerizer, and Kubernetes, implement networking in much the same way.
Standardizing Networking Solutions
Given that various container solutions tackle the same networking challenges, why should each of them develop its own solution? Why not create a single standard approach for everyone to follow? This is where the idea of a unified networking program, call it **bridge**, comes in:
- Bridge program:
  - A program or script that performs all the tasks needed to attach a container to a bridge network.
  - Container runtimes call the bridge program to handle the networking configuration.
For example, to attach a container's namespace using the bridge program, you could run the following (a sketch of what such a script might do internally follows the example):
$ bridge add <cid> <namespace>
$ bridge add 9ba3541a137c /var/run/netns/9ba3541a137c
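Internally, such a bridge script would be little more than a wrapper around the commands from the previous section. The sketch below is purely hypothetical: the interface names, the bridge v-net-0, and the hard-coded addresses are placeholders, and a real plugin would also manage IP allocation rather than assigning a fixed address.

# bridge add <container-id> <namespace-path> -- hypothetical sketch, not a real tool
cid="${1:0:5}"                       # shorten the container id for interface names
ns="$(basename "$2")"                # named namespaces live under /var/run/netns
ip link add "veth-$cid" type veth peer name "veth-$cid-br"
ip link set "veth-$cid-br" master v-net-0
ip link set "veth-$cid-br" up
ip link set "veth-$cid" netns "$ns"
ip -n "$ns" addr add 192.168.15.10/24 dev "veth-$cid"   # a real tool would allocate this
ip -n "$ns" link set "veth-$cid" up
ip -n "$ns" route add default via 192.168.15.5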
Developing a Standard Program
If you wanted to create a similar program for a new networking type, you would need to consider:
- Supported arguments and commands.
- Ensuring compatibility with container runtimes like Kubernetes or rkt.
Introduction to Container Network Interface (CNI)
To address these challenges, CNI (Container Network Interface) provides a set of standards for developing networking solutions in container runtime environments. Here’s how it works:
CNI Requirements:
- The container runtime must create the network namespace.
- It must identify the network the container should attach to.
- It must invoke the network plugin (e.g. bridge) when a container is added (ADD).
- It must invoke the network plugin (e.g. bridge) when a container is deleted (DEL).
- It must pass the network configuration to the plugin in JSON format (an example configuration file follows this list).
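For the last point, a network configuration file for the bridge plugin typically looks something like the following; the file name, network name, and subnet here are examples rather than required values:

$ cat /etc/cni/net.d/10-mynet.conf
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.15.0/24"
  }
}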
Plugin Requirements:
- Must support the add, del, and check commands.
- Must support parameters such as the container ID and network namespace (see the example invocation after this list).
- Must handle IP assignment and routing for the container.
- Must return results in a specified format.
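In practice, these parameters are passed to the plugin as environment variables, the network configuration is fed in on stdin, and the plugin writes its result as JSON to stdout. A manual invocation of the bridge plugin might look roughly like this, reusing the illustrative container ID, namespace path, and configuration file from above:

$ CNI_COMMAND=ADD \
  CNI_CONTAINERID=9ba3541a137c \
  CNI_NETNS=/var/run/netns/9ba3541a137c \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /etc/cni/net.d/10-mynet.conf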
CNI Plugins
CNI includes several supported plugins, such as:
- Bridge, VLAN, IPVLAN, MACVLAN
- IPAM plugins like host-local and DHCP
- Third-party plugins like Weave, Flannel, Cilium, VMware NSX, Calico, Infoblox
These plugins work across any runtime that implements CNI standards, ensuring interoperability.
Docker and CNI
Docker, however, does not implement CNI. Instead, it has its own standard called CNM (Container Network Model). Due to differences between CNI and CNM, Docker cannot natively use CNI plugins. Nonetheless, you can still use Docker with CNI by manually invoking the bridge plugin, similar to how Kubernetes handles Docker containers.
For example, this will not work:
$ docker run --network=cni-bridge nginx
Instead, you would create the container with no network attached and then invoke the bridge program manually:
$ docker run --network=none nginx
$ bridge add 9ba3541a137c /var/run/netns/9ba3541a137c
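One detail worth spelling out is where that namespace path comes from: Docker does not register its namespaces under /var/run/netns, so you would first expose the container's network namespace there yourself. A rough sketch, using an illustrative container name:

$ docker run --name web --network=none -d nginx
$ pid=$(docker inspect -f '{{.State.Pid}}' web)
$ mkdir -p /var/run/netns
$ ln -s /proc/$pid/ns/net /var/run/netns/web
$ bridge add web /var/run/netns/web

This is essentially the extra work a runtime would otherwise do for you, and it is roughly what Kubernetes did on your behalf when it managed Docker containers.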
Conclusion
We will delve deeper into how CNI is used within Kubernetes in the upcoming posts.