
Architecture

The Kubernetes+Docker solution applies to container cloud scenarios.

Linux Containers (LXC) is an OS-level virtualization method for running multiple isolated containers on a control host with a single Linux kernel. It uses the kernel's cgroups and namespace isolation features to achieve low host resource usage and fast startup. Docker is a Linux container engine that enables application packaging and quick deployment. It uses Linux containers to turn applications into standardized, portable, self-managed components, giving applications a "Build Once, Run Everywhere" capability. Docker's features include quick application release, easy application deployment and capacity expansion, high application density, and easy application management.
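
The "Build Once, Run Everywhere" idea can be sketched as content-addressed packaging: the same application content always produces the same immutable image, so the image runs identically on any host. The following is an illustrative Python sketch only; real Docker images are tarball layers addressed by SHA-256 digests, and the dict below merely stands in for an image manifest.

```python
import hashlib
import json

def build_image(base, app_files, command):
    """Package an app into an immutable, content-addressed 'image' record.

    Simplified stand-in for a Docker image manifest: identical content
    always yields the identical digest, so the artifact is portable.
    """
    manifest = {"base": base, "files": sorted(app_files), "cmd": command}
    digest = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()).hexdigest()
    return {"digest": f"sha256:{digest}", "manifest": manifest}

# Building twice from the same content gives the same image.
img1 = build_image("alpine:3.19", ["app.py"], ["python", "app.py"])
img2 = build_image("alpine:3.19", ["app.py"], ["python", "app.py"])
assert img1["digest"] == img2["digest"]
```

Because the digest is derived only from the content, any host that has the digest can verify it runs exactly what was built.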

Kubernetes groups Docker container host machines into a cluster for unified resource scheduling, automatic container life cycle management, and cross-node service discovery and load balancing. Concepts such as labels and pods give it better support for microservices and for drawing clear boundaries between services.

A Kubernetes cluster consists of at least one cluster master and multiple nodes. It features a lightweight, plugin-based architecture, easy migration, quick deployment, and good scalability. Figure 1 shows the architecture.

Figure 1 Kubernetes and Docker container architecture
Table 1 Nodes in the Kubernetes and Docker container scenario

Name

Description

Pod

Pods are the smallest deployable units that can be created, scheduled, and managed in Kubernetes. A pod is a group of one or more containers rather than a single application container. The containers in a pod are always co-located and co-scheduled, and run in a shared context: they share the same network namespace, IP address, port space, and volumes. Pods are short-lived; a pod remains on the node where it is scheduled until it is deleted.
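
A two-container pod of the kind described above can be written out as a manifest. The sketch below expresses one as a plain Python dict mirroring the Kubernetes manifest layout; the pod name, container names, and image tags are illustrative, not taken from this document.

```python
# Hypothetical pod: a web server plus a log-collecting sidecar.
# Both containers share the pod's network namespace, IP address,
# port space, and the "logs" volume.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {"name": "server", "image": "nginx:1.25",
             "ports": [{"containerPort": 80}]},
            {"name": "log-agent", "image": "fluentd:v1.16",
             "volumeMounts": [{"name": "logs",
                               "mountPath": "/var/log/app"}]},
        ],
        # The shared volume lives at the pod level, not per container.
        "volumes": [{"name": "logs", "emptyDir": {}}],
    },
}

assert len(pod["spec"]["containers"]) == 2
```

The key point is that volumes and the network identity belong to the pod, while images and commands belong to individual containers.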

API server

Exposes Kubernetes APIs. Whether cluster resources are operated through kubectl or by invoking the APIs directly, all operations go through the interfaces provided by kube-apiserver.
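
The interfaces kube-apiserver exposes are REST paths; kubectl is ultimately a client of the same endpoints. The helper below is a small illustrative sketch that builds the path for a core-group (`/api/v1`) resource; it is not part of any real client library.

```python
def core_v1_path(namespace, resource, name=None):
    """Build the REST path kube-apiserver serves for a core-group resource.

    Core-group resources (pods, services, ...) live under /api/v1;
    resources in named API groups use /apis/<group>/<version> instead.
    """
    path = f"/api/v1/namespaces/{namespace}/{resource}"
    return f"{path}/{name}" if name else path

# Listing all pods in a namespace vs. fetching one named pod.
assert core_v1_path("default", "pods") == "/api/v1/namespaces/default/pods"
assert core_v1_path("default", "pods", "web") == "/api/v1/namespaces/default/pods/web"
```

kubectl, the controllers, the scheduler, and the kubelet all read and write cluster state through paths of this shape rather than talking to each other directly.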

kube-controller-manager

Manages the entire Kubernetes cluster and ensures that all resources in the cluster are in the expected state. When the status of a resource in the cluster is abnormal, the controller manager triggers the corresponding scheduling operation. The controller manager consists of the following controllers:

  • Node controller
  • Replication controller
  • Endpoints controller
  • Namespace controller
  • Service accounts controller
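
Each of these controllers runs the same pattern: compare the expected state with the observed state and emit corrective actions. A simplified sketch of one pass of a replication-controller-style loop, assuming a plain desired replica count and a list of observed pods (the real controller works through the API server and etcd):

```python
def reconcile_replicas(desired, actual_pods):
    """One pass of a reconciliation loop for replica counts.

    Returns the actions needed to move the actual state toward
    the desired state; an empty list means the state is as expected.
    """
    if len(actual_pods) < desired:
        return [("create", desired - len(actual_pods))]
    if len(actual_pods) > desired:
        return [("delete", len(actual_pods) - desired)]
    return []

assert reconcile_replicas(3, ["pod-a"]) == [("create", 2)]
assert reconcile_replicas(1, ["pod-a", "pod-b"]) == [("delete", 1)]
assert reconcile_replicas(2, ["pod-a", "pod-b"]) == []
```

Running this comparison continuously, rather than reacting to individual events, is what lets the cluster converge back to the expected state after any failure.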

Scheduler

Schedules pods in the Kubernetes cluster. It receives scheduling requests triggered by the kube-controller-manager, computes a placement based on the request specifications, scheduling constraints, and overall resource status, and sends the scheduling task to the kubelet component on the target node.
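
The scheduling calculation can be thought of in two phases: filter out nodes that cannot satisfy the request's constraints, then score the remainder against overall resource status. The sketch below is a heavily simplified illustration (the real kube-scheduler applies many more predicates and priorities); node names and resource fields are hypothetical.

```python
def schedule(pod_request, nodes):
    """Pick a node for a pod: filter nodes that fit, then score by free CPU."""
    # Filter phase: drop nodes that cannot satisfy the request.
    fits = [n for n in nodes
            if n["free_cpu"] >= pod_request["cpu"]
            and n["free_mem"] >= pod_request["mem"]]
    if not fits:
        return None  # no feasible node; the pod stays pending
    # Score phase: prefer the node with the most free CPU.
    return max(fits, key=lambda n: n["free_cpu"])["name"]

nodes = [
    {"name": "node-1", "free_cpu": 2.0, "free_mem": 4096},
    {"name": "node-2", "free_cpu": 6.0, "free_mem": 8192},
]
assert schedule({"cpu": 1.0, "mem": 2048}, nodes) == "node-2"
assert schedule({"cpu": 8.0, "mem": 2048}, nodes) is None
```

Once a node is chosen, the task is handed to that node's kubelet, which actually starts the containers.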

etcd

etcd is an efficient, distributed, strongly consistent key-value (KV) storage system used for shared configuration and service discovery. Kubernetes uses it to store all data that must be persisted.
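
Kubernetes lays its persisted objects out in etcd's flat key space under a common prefix, so related objects can be read back with a prefix query. The toy class below illustrates that access pattern only; it has none of etcd's consensus, leases, or watch machinery, and the key layout shown (`/registry/...`) is the conventional one, used here for illustration.

```python
class TinyKV:
    """Toy in-memory stand-in for etcd's flat key-value space."""

    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

    def get_prefix(self, prefix):
        # etcd serves range reads; a prefix query is the common case.
        return {k: v for k, v in self.data.items() if k.startswith(prefix)}

kv = TinyKV()
kv.put("/registry/pods/default/web", {"phase": "Running"})
kv.put("/registry/pods/kube-system/dns", {"phase": "Running"})

# All pods in one namespace come back from a single prefix read.
assert list(kv.get_prefix("/registry/pods/default/")) == ["/registry/pods/default/web"]
```

Strong consistency matters here because every controller's view of "expected state" is read from this store.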

kubelet

kubelet is the core component on a node. It is responsible for the node's share of the cluster's computing tasks and performs the following functions:

  • Watches for tasks assigned by the kube-scheduler.
  • Mounts volumes for pod containers.
  • Downloads secrets for pod containers.
  • Runs containers by interacting with the Docker daemon.
  • Periodically performs container health checks.
  • Monitors and reports pod status to the kube-controller-manager.
  • Monitors and reports node status to the kube-controller-manager.
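
At its heart, the kubelet repeatedly compares the set of pods assigned to its node with the containers actually running there, and drives the container runtime to close the gap. A simplified sketch of one such sync pass (real kubelet syncing also handles volumes, secrets, probes, and status reporting, as listed above):

```python
def sync_pods(assigned, running):
    """One kubelet-style sync pass for a node.

    assigned: pods the scheduler has placed on this node.
    running:  containers the runtime reports as running.
    Returns (to_start, to_stop) lists of corrective actions.
    """
    to_start = [p for p in assigned if p not in running]
    to_stop = [c for c in running if c not in assigned]
    return to_start, to_stop

# "web" was newly assigned; "old-job" was deleted from the node.
start, stop = sync_pods({"web", "dns"}, {"dns", "old-job"})
assert start == ["web"] and stop == ["old-job"]
```

This is the same reconcile-toward-expected-state pattern the controller manager uses, applied at node scope.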

kube-proxy

Forwards service requests to backend pod instances and maintains the load balancing rules.
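
The effect of those load balancing rules is that successive requests to one service address are spread across the service's pod endpoints. The sketch below mimics that with a simple round-robin picker; real kube-proxy programs iptables/IPVS rules rather than running application-level code, and the endpoint addresses shown are made up.

```python
from itertools import cycle

def make_balancer(endpoints):
    """Round-robin picker over a service's pod endpoints."""
    rr = cycle(endpoints)
    return lambda: next(rr)

# Two hypothetical pod endpoints backing one service.
pick = make_balancer(["10.0.0.5:80", "10.0.0.6:80"])
assert [pick() for _ in range(4)] == ["10.0.0.5:80", "10.0.0.6:80"] * 2
```

Because clients only ever see the stable service address, pods can be added, removed, or rescheduled without clients noticing.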