Using Kunpeng TAP
Kunpeng TAP allows you to specify CPU resource requirements during Pod deployment; the system then automatically allocates resources based on NUMA affinity. After the plugin is deployed, you only need to specify the request and limit values for CPU resources when deploying Pods. By adding a node selector to the YAML file, you can also deploy a Pod on a specific node.
The following is an example YAML file for deploying a single-container Pod. The CPU resources requested by the Pod are 4 cores at minimum and 8 cores at maximum. The memory is fixed at 4 GiB. busybox is used as the container image.
- Create a YAML file example.yaml, and write the following configuration into the file:
apiVersion: v1
kind: Pod
metadata:
  name: tap-test
  annotations:
spec:
  containers:
  - name: tap-example
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo `date`; sleep 5; done"]
    resources:
      requests:
        cpu: "4"
        memory: "4Gi"
      limits:
        cpu: "8"
        memory: "4Gi"
- For example, to make the Pod run on the compute01 node, add the following content to the spec section of the YAML file.
In a Kubernetes cluster with multiple worker nodes, a Pod can be scheduled to any of them. To make the Pod run on a specified node, add the nodeSelector field to the spec section of the YAML file and set kubernetes.io/hostname to the name of the target node.
nodeSelector:
  kubernetes.io/hostname: compute01
- Apply the YAML file on the management node to deploy the Pod.
kubectl apply -f example.yaml
- Check whether Kunpeng TAP takes effect.
- Using Docker as an example, log in to the compute01 node specified by nodeSelector in step 2, run docker commands to query the CpusetCpus parameter of the container, and check whether NUMA affinity has been established.
- Run docker ps to query the containers running on the node. In the NAMES column, find the container whose name contains tap-example, the value of spec.containers.name specified in step 1.
# docker ps | grep tap-example
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
- Query the CpusetCpus parameter of the target container based on its container ID. This parameter indicates the range of CPU cores that can be scheduled by the container.
# docker inspect bf32de0d09fe | grep "CpusetCpus"
        "CpusetCpus": "0-23",
If memory binding is enabled, you can run the following command to check the bound NUMA node.
# docker inspect bf32de0d09fe | grep "CpusetMems"
        "CpusetMems": "0",
Which NUMA nodes are bound is not fixed; the value of CpusetCpus may differ on different servers.
In the containerd scenario, you can run the following command to check the schedulable CPU range of a container:
# crictl inspect bf32de0d09fe | grep "cpuset_cpus"
        "cpuset_cpus": "0-23",
If the NUMA affinity configuration fails, the query may return no cpuset_cpus output.
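As an aside, docker inspect can also print the value directly through its Go-template output, for example docker inspect --format '{{.HostConfig.CpusetCpus}}' bf32de0d09fe. When only the grep-style line is available, the value can be stripped of the surrounding JSON with sed. A minimal sketch using the example line from this section (the line content is an assumption taken from the sample output above):

```shell
# Example line as produced by `docker inspect <id> | grep CpusetCpus`:
line='"CpusetCpus": "0-23",'
# Keep only the value between the second pair of quotes:
cpuset=$(printf '%s' "$line" | sed 's/.*: "\(.*\)",/\1/')
echo "$cpuset"
```

The extracted range (0-23 in this example) is what you compare against the lscpu output in the next step.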
- Query the NUMA information of the system and compare it with the schedulable CPU core range of the container. The NUMA node matching the schedulable CPU core range is the affinity node of the container.
# lscpu
...
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
...
node0 indicates the NUMA node whose index is 0, and 0-23 indicates the CPU cores in this NUMA node.
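The comparison in this step can also be scripted. The sketch below matches a CpusetCpus value against lscpu-style lines; the heredoc replays the example output from this section, and on a real host you would pipe `lscpu | grep "NUMA node"` into grep instead. It assumes, as in this example, that the cpuset range exactly equals one NUMA node's CPU list.

```shell
cpuset="0-23"   # CpusetCpus value read in the previous step
# The heredoc replays this section's example lscpu lines.
numa_line=$(grep "CPU(s): ${cpuset}\$" <<'EOF'
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
EOF
)
echo "$numa_line"   # the matching line names the affinity NUMA node
```

With the example values, the match is the node0 line, so NUMA node 0 is the affinity node of the container.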
- Run the following command to check the NUMA distribution of NICs in the system. 0200 is the PCI class code for Ethernet controllers (NICs).
lspci -vvv -d :0200 | grep NUMA
Command output:
NUMA node: 0
NUMA node: 0
NUMA node: 0
NUMA node: 0
NUMA node: 0
NUMA node: 2
NUMA node: 2
NUMA node: 2
NUMA node: 2
NUMA node: 2
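The per-node distribution is easier to read when summarized. A small sketch using the example output above (on a real host, pipe the lspci command into sort | uniq -c directly):

```shell
# Count devices per NUMA node from the example lspci output above.
summary=$(sort <<'EOF' | uniq -c
NUMA node: 0
NUMA node: 0
NUMA node: 0
NUMA node: 0
NUMA node: 0
NUMA node: 2
NUMA node: 2
NUMA node: 2
NUMA node: 2
NUMA node: 2
EOF
)
echo "$summary"
```

In this example, five NICs are attached to NUMA node 0 and five to NUMA node 2.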