Deploying the Kubernetes Cluster
To deploy a Kubernetes cluster, configure the management and compute nodes, and add the Flannel network plugin on the management node.
Configuring the Management Node
If you need to set up a new Kubernetes cluster, clear the existing Kubernetes cluster settings. For details, see Uninstalling Kubernetes.
- Optional: Check whether a proxy is configured. If a proxy is configured, delete it to prevent a kubeadm init timeout.
- Check whether a proxy is configured.
env | grep -E "http_proxy|https_proxy|no_proxy"
If the command produces any output, a proxy is configured.
- Delete the proxy.
export -n http_proxy
export -n https_proxy
export -n no_proxy
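The check-and-delete steps above can be combined into a single guard run before initialization. A minimal sketch (bash, since export -n is a bash builtin):

```shell
# Clear any configured proxy so that `kubeadm init` does not time out.
# `export -n` removes the export attribute, so child processes such as
# kubeadm no longer see the variables.
if env | grep -qE "http_proxy|https_proxy|no_proxy"; then
    echo "proxy detected; clearing"
    export -n http_proxy https_proxy no_proxy
fi
```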
- For openEuler, manually create the resolv.conf file (skip this step for CentOS). If the resolv.conf file is missing, an error is reported during cluster initialization. Run the following command to create an empty resolv.conf file:
touch /etc/resolv.conf
- Run the cluster initialization command on the management node.
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.23.1
- v1.23.1 indicates the Kubernetes version. Replace it with the actual one.
- You do not need to run the cluster initialization command on the compute nodes.
- If the following information is displayed when initializing the management node:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
You can run the following command to set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 (this file cannot be edited with vim; use echo redirection instead):
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
- The --pod-network-cidr option specifies the IP address segment used by the Kubernetes pod network. Flannel uses the fixed segment 10.244.0.0/16, so specify this value when deploying Flannel.
- The --control-plane-endpoint option specifies a stable IP address or DNS name for the control plane. This parameter is supported in kubeadm 1.15 and later. You can add this parameter to the initialization command if needed.
If the information shown in Figure 1 is displayed, the management node is successfully initialized.
As shown in Figure 1, the yellow box contains the commands used to configure the cluster on the management node, and the red box contains the kubeadm join command (including the token) for adding compute nodes to the cluster. Save this command.
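The bridge-nf-call-iptables fix above can be expressed as a small check-and-set script. A minimal sketch; the temp-file fallback is only a stand-in for exercising the logic on hosts where the br_netfilter module is not loaded or the script runs without root:

```shell
# Ensure bridge-nf-call-iptables is 1 before running `kubeadm init`.
BRIDGE_SYSCTL=/proc/sys/net/bridge/bridge-nf-call-iptables
if [ ! -w "$BRIDGE_SYSCTL" ]; then
    # Demonstration fallback: a stand-in file seeded with the bad value.
    BRIDGE_SYSCTL=$(mktemp)
    echo 0 > "$BRIDGE_SYSCTL"
fi
if [ "$(cat "$BRIDGE_SYSCTL")" != "1" ]; then
    echo 1 > "$BRIDGE_SYSCTL"
fi
cat "$BRIDGE_SYSCTL"
```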
- Configure the cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
- View the cluster nodes on the management node.
kubectl get nodes
- Save the cluster joining information generated by the management node.
This information is printed after the management node is initialized successfully. For example:
kubeadm join 192.168.1.11:6443 --token a9020j.vnfgqk7n30p5d9z0 --discovery-token-ca-cert-hash sha256:c465651177b41c545fe20f8dc052b9661a8375afdeac7e7ecf52029fc66a506a
You can use the token to add compute nodes to the cluster within 24 hours.
- The token is randomly generated at initialization; use the exact command printed for your own cluster.
- A token is valid for 24 hours by default. If the token has expired, run the following command on the management node to generate a new one:
kubeadm token create --print-join-command
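Because the join command must be saved and reused, it can help to pull the token and CA certificate hash back out of the saved line. A sketch using the example values from this guide (replace JOIN_CMD with your own saved output):

```shell
# Extract the token and discovery hash from a saved `kubeadm join` line.
JOIN_CMD='kubeadm join 192.168.1.11:6443 --token a9020j.vnfgqk7n30p5d9z0 --discovery-token-ca-cert-hash sha256:c465651177b41c545fe20f8dc052b9661a8375afdeac7e7ecf52029fc66a506a'
TOKEN=$(echo "$JOIN_CMD" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
CA_HASH=$(echo "$JOIN_CMD" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token: $TOKEN"
echo "hash:  $CA_HASH"
```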
Configuring a Compute Node
- Optional: If HTTP and HTTPS proxies have been configured on the Kubernetes worker node, delete the proxies.
export -n http_proxy
export -n https_proxy
export -n no_proxy
- Add a compute node to the cluster.
- Run the following command on the compute node to add the node to the cluster:
kubeadm join 192.168.1.11:6443 --token a9020j.vnfgqk7n30p5d9z0 --discovery-token-ca-cert-hash sha256:c465651177b41c545fe20f8dc052b9661a8375afdeac7e7ecf52029fc66a506a
The token is valid for 24 hours by default. If it has expired, run kubeadm token create --print-join-command on the management node to generate a new join command.
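If the saved join command is lost but the token is still valid, the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate. A sketch; the throwaway self-signed certificate is only a stand-in used when /etc/kubernetes/pki/ca.crt is not present on the machine running it:

```shell
# Derive the discovery hash (SHA-256 of the CA public key in DER form).
CA_CRT=/etc/kubernetes/pki/ca.crt
if [ ! -r "$CA_CRT" ]; then
    # Stand-in certificate for demonstration on non-cluster machines.
    CA_CRT=$(mktemp)
    openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
        -subj "/CN=demo-ca" -days 1 -out "$CA_CRT" 2>/dev/null
fi
CA_HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$CA_HASH"
```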
- Wait one minute. On the management node, run the following command to check the new compute node:
kubectl get nodes
An example of the expected result:
NAME        STATUS     ROLES    AGE   VERSION
master      NotReady   master   12h   v1.23.1
compute01   NotReady   <none>   11h   v1.23.1
compute02   NotReady   <none>   11h   v1.23.1
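Rather than eyeballing the STATUS column, the output can be scanned for nodes that are not yet Ready. A sketch parsing the sample output above (pipe real kubectl get nodes output in practice):

```shell
# Count nodes whose STATUS column is NotReady. The sample text mirrors
# the expected result above; replace with `kubectl get nodes` output.
NODES='NAME        STATUS     ROLES    AGE   VERSION
master      NotReady   master   12h   v1.23.1
compute01   NotReady   <none>   11h   v1.23.1
compute02   NotReady   <none>   11h   v1.23.1'
NOT_READY=$(echo "$NODES" | awk 'NR>1 && $2=="NotReady" {n++} END {print n+0}')
echo "$NOT_READY nodes not ready"
```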
- Check the kubelet service status on the management and compute nodes.
systemctl status kubelet
The kubelet service should be in the active (running) state on each node.
Adding the Flannel Network Plugin
Add the Flannel network plugin on the management node to enable network communication between pods across host nodes.
- Download the Flannel network plugin configuration file.
wget --no-check-certificate https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- Modify the kube-flannel.yml file to configure resources.
- Open the file.
vim kube-flannel.yml
- Press i to enter the insert mode. Under resources, modify the resources used by the Flannel network plugin. Set the parameters based on your requirements.
resources:
  requests:
    cpu: "100m"
    memory: "50Mi"
  limits:
    cpu: "200m"
    memory: "100Mi"
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Install the Flannel plugin.
kubectl apply -f kube-flannel.yml
- Check the node status on the management node.
kubectl get nodes
The node status changes to Ready. An example of the expected result:
NAME        STATUS   ROLES    AGE   VERSION
master      Ready    master   12h   v1.23.1
compute01   Ready    <none>   11h   v1.23.1
compute02   Ready    <none>   11h   v1.23.1
- Check the pod status on the management node.
kubectl get pod -A
If READY is 1/1 in the command output, the pod is running properly.
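The READY check can likewise be scripted by comparing the two sides of each pod's n/n value. A sketch over hypothetical sample output (the pod names are illustrative only; pipe real kubectl get pod -A output in practice):

```shell
# Count pods whose READY column (field 3 in `kubectl get pod -A`
# output) is not n/n. The sample lines below are illustrative.
PODS='NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-sample            1/1     Running   0          12h
kube-system   kube-flannel-ds-sample    1/1     Running   0          30m'
NOT_READY=$(echo "$PODS" | awk 'NR>1 {split($3,a,"/"); if (a[1]!=a[2]) n++} END {print n+0}')
echo "$NOT_READY pods not fully ready"
```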
Figure 2 pod status
