Introduction
Overview
- Kubernetes
Kubernetes (K8s) is an open-source system for automatically deploying, scaling, and managing containerized applications. It aims to provide a platform for automating the deployment, scaling, and operation of application containers across a cluster of hosts, and it supports a range of container tools, including Docker. Kubernetes defines a set of building blocks that together provide mechanisms for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible so that the system can accommodate a wide variety of workloads. This extensibility is provided in large part by the Kubernetes API, which is used both by internal components and by extensions and containers that run on Kubernetes.
- Docker
Docker is an open-source application container engine. If software delivery and the runtime environment are compared to sea transport, the OS is like a freighter and each piece of OS-based software is like a shipping container: users can flexibly assemble the runtime environment using standard methods, and the contents of a container can be customized by users or prepared by professionals. In this way, a piece of software can be built from a series of standardized components, like Lego bricks: users simply select a suitable combination of building blocks and place their own component on top (the last standardized component is the user's app). This is the prototype of Docker-based PaaS products.
Developers can package their applications and dependencies into a portable container and then publish it to any popular Linux machine. Containers use a sandbox mechanism and are fully isolated from one another, with no interfaces between them.
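As an illustration of this packaging model, a minimal Dockerfile might look like the sketch below. The base image, application file, and install steps are placeholders, not part of the deployment described in this document.

```dockerfile
# Hypothetical example: package a simple application and its
# dependencies into a single portable image.
FROM ubuntu:18.04

# Install the runtime dependencies inside the image.
RUN apt-get update && \
    apt-get install -y python3 && \
    rm -rf /var/lib/apt/lists/*

# Copy the application into the image.
COPY app.py /opt/app/app.py

# Define the command that runs when a container starts.
CMD ["python3", "/opt/app/app.py"]
```

Building this image (`docker build -t webapp .`) and running it (`docker run webapp`) reproduces the same environment on any Linux host with Docker installed.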
- Ceph
Ceph is a reliable, auto-rebalancing, and auto-recovering scale-out storage system. Depending on the application scenario, Ceph can provide object storage, block device, and file system services. Ceph delivers unified scale-out storage with self-healing and intelligent fault prediction functions, and it has become one of the most widely accepted standards for software-defined storage. Because Ceph is open source, many vendors provide Ceph-based software-defined storage systems.
- Hybrid deployment
Hybrid deployment is a functional requirement for enabling the Kunpeng chip ecosystem. The hybrid deployment of Kubernetes and Docker on x86 and Kunpeng servers needs to be implemented. Functions that support hybrid deployment include container image creation, container network management, and container storage management.
Based on the actual usage mode, there are two scenarios:
- Capacity Expansion Scenario
The customer has a Kubernetes + Docker cluster based on x86 servers. The customer requires that the newly purchased Kunpeng servers be added to the existing cluster and the Kubernetes master node (x86 server) be used to provision containers, manage networks, and manage storage for Kunpeng nodes.
- Kubernetes Cluster Creation Scenario
The customer builds a new Kubernetes cluster based on the existing x86 and Kunpeng servers and selects a Kunpeng server as the master node of the Kubernetes cluster. The Kubernetes master can manage Kunpeng and x86 nodes and perform container provisioning, network management, and storage management for the nodes.
Storage management covers two storage types: Volume and PersistentVolume. Given the popularity of Ceph, PersistentVolume verification in this document uses Ceph storage deployed outside the Kubernetes cluster.
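As a sketch of what such a PersistentVolume might look like, Kubernetes' built-in RBD volume source can reference a Ceph block device outside the cluster. The monitor address, pool, image, and secret names below are placeholders.

```yaml
# Hypothetical PersistentVolume backed by an external Ceph RBD image.
# Monitor address, pool/image names, and the secret name are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.1.10:6789     # Ceph monitor address (placeholder)
    pool: kube                # RBD pool created for Kubernetes
    image: pv-image           # RBD image to attach
    user: admin
    secretRef:
      name: ceph-secret       # Secret holding the Ceph client keyring
    fsType: ext4
  persistentVolumeReclaimPolicy: Retain
```

A PersistentVolumeClaim can then bind to this volume, and pods on either x86 or Kunpeng nodes can mount it through the claim.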
This document describes how to install and deploy the Kubernetes + Docker hybrid deployment environment on the x86 and Kunpeng servers.
Recommended Versions
The table below lists the recommended software versions.
| Software | Version |
|---|---|
| K8s | v1.15.1 |
| Docker | 18.09 |
| Ceph | 14.2.1 |
Constraints
Due to the architecture compatibility problem of service container images (that is, not all container images support both the Arm64 and x86 architectures), the server architecture must be specified when Kubernetes provisions pods. This avoids the risk of a pod drifting to a server of a different architecture when its current server fails.
As shown in the example below, the existing nodeSelector mechanism of Kubernetes can be used to pin pods to a specific server architecture.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      name: webapp
      labels:
        app: webapp
      annotations:
        cni: "flannel"
    spec:
      containers:
      - name: webapp
        image: tomcat
        ports:
        - containerPort: 8080
      nodeSelector:
        kubernetes.io/arch: arm64
```