Concepts
Kubernetes
Kubernetes is an open-source container orchestration engine originally developed by Google. It automates the deployment, scaling, and management of containerized applications. Kubernetes classifies its network scenarios into four types: communication between containers in the same pod, communication between pods, communication between pods and services, and communication between systems outside the cluster and services.
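The pod-to-service scenario can be illustrated with a minimal Service manifest. This is a sketch only; the names `web-svc` and the `app: web` label are illustrative and not part of any real cluster:

```yaml
# Hypothetical Service fronting pods labeled app: web.
# Other pods reach those pods through the service's
# cluster-internal virtual IP instead of individual pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: web-svc        # illustrative name
spec:
  selector:
    app: web           # matches pods carrying this label
  ports:
    - port: 80         # port exposed on the service IP
      targetPort: 8080 # container port on the selected pods
```

The virtual IP is allocated from the service network, which is distinct from the pod network described below.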
Pod and service resource objects use their own dedicated networks. The pod network is implemented by a Kubernetes network plugin that follows the CNI model, while the service network is specified when the Kubernetes cluster is created. Because the overall network model is realized through external plugins, you need to plan the network model and network deployment before deploying the Kubernetes cluster.
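In a kubeadm-based deployment, for example, this planning amounts to choosing the two address ranges up front. A minimal sketch, assuming kubeadm is used; the CIDR values are examples and must not overlap with each other or with the node network:

```yaml
# Sketch of a kubeadm ClusterConfiguration fragment.
# podSubnet must match the range the CNI plugin is configured to serve.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16      # pod network, implemented by the CNI plugin
  serviceSubnet: 10.96.0.0/12   # service network, assigned by the cluster
```

Changing these ranges after the cluster is running is disruptive, which is why the planning must happen before deployment.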
Container Network Model (CNM)
CNM was proposed by Docker and adopted by the Libnetwork project as its network model standard. Common open-source network components such as Kuryr, OVN, Calico, and Weave use it for container network interconnection.
As shown in Figure 1, Libnetwork, the reference implementation of CNM, provides the interfaces between the Docker daemon and the network drivers. The network controller matches drivers to networks, and each driver manages the networks it owns and provides services such as IPAM for them. CNM drivers can be native drivers (built into Libnetwork or supported directly by Docker) or third-party plugin drivers. The native drivers are None, Bridge, Overlay, and MACvlan; third-party drivers provide additional functions. In addition, the scope of a CNM driver can be defined as either local (single-host mode) or global (multi-host mode).
Containers are connected through a series of network endpoints, as shown in Figure 2. A typical network interface takes the form of a veth pair: one end of the pair sits in the container's network sandbox, and the other end is attached to the specified network. A given network endpoint can join only one network plane, but the network sandbox of a container can hold multiple network endpoints.
Container Network Interface (CNI)
CNI is proposed by CoreOS and adopted by Apache Mesos, Cloud Foundry, Kubernetes, Kurma, and rkt as the network model standard. CNI is used by common open-source network components such as Contiv Networking, Calico, and Weave for container network interconnection.
As shown in Figure 3, CNI is built on a deliberately minimal specification, which lets network engineers implement the protocol between the container runtime and network plugins in a simple way.
Multiple network plugins can serve a single container, so the container can connect to different network planes driven by different plugins. Each network is defined in a JSON configuration file and is instantiated in the container's network namespace when the CNI plugins are invoked.
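Such a JSON network configuration might look like the following sketch. The plugin type, bridge name, and subnet are illustrative assumptions, not values mandated by CNI; the runtime passes this document to the named plugin binary (`bridge` here) when creating or deleting the container's network attachment:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
```

The `type` field selects the plugin executable, and the nested `ipam` section delegates address assignment to a second plugin, which is how CNI composes small single-purpose plugins into a complete network.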
Open vSwitch
Open vSwitch (OVS) is multilayer virtual switch software released under the Apache 2.0 open-source license. It aims to provide a production-quality switching platform that supports standard management interfaces and opens its forwarding functions to programmatic extension and control. Open Virtual Network (OVN) is a native virtualization network solution built on OVS; it uses existing OVS capabilities to deliver large-scale, high-quality cluster networking.
Kube-OVN network topology
Figure 4 shows the switches, cluster routers, and firewalls of Kube-OVN. They are deployed on every node in the cluster in distributed mode, so the cluster's network topology has no single point of failure.



