Binding CPUs to a Container
Binding CPUs to a container prevents the container's processes from frequently migrating between CPUs or NUMA nodes and from accessing remote memory. This reduces latency, avoids wasted resources, and improves container performance and resource utilization. When creating a Docker container, you are advised to bind CPUs to memory in 1:1 mode and ensure that the bound CPUs and the memory they access are on the same die to achieve optimal performance.
You can bind CPUs by CPU core or by NUMA node.
- By default, processes of different containers may run on the same physical CPU core, which causes CPU resource contention and performance deterioration.
CPU pinning by CPU core binds a container to specific CPU cores. This avoids frequently switching different processes onto the same core, preventing CPU resource contention and improving system performance.
- In a multiprocessor system, each processor has its own local memory and can also access the memory of other processors. However, because memory access latency and bandwidth differ between processors, accessing remote memory can become a performance bottleneck.
CPU pinning by NUMA node binds the processes in a container to a specific NUMA node. This prevents cross-die and cross-chip memory access in a Docker container, improving container performance and resource utilization.
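The effect of pinning by core can be sketched outside Docker with taskset (from util-linux). The core ID 0 below is an illustrative choice, not taken from the examples later in this section:

```shell
# Start a short-lived process pinned to CPU core 0 (illustrative core ID),
# then print its affinity list to confirm the binding.
taskset -c 0 sleep 2 &
pid=$!
taskset -cp "$pid"    # e.g. "pid 1234's current affinity list: 0"
kill "$pid" 2>/dev/null
```

Docker's `--cpuset-cpus` option, used later in this section, applies the same kind of affinity restriction to every process in a container.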
- Query the NUMA information.
```
numactl -H
```

Expected result:

```
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 130064 MB
node 0 free: 118172 MB
node 1 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 1 size: 130937 MB
node 1 free: 129705 MB
node 2 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 2 size: 130937 MB
node 2 free: 129097 MB
node 3 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 3 size: 130935 MB
node 3 free: 129229 MB
node distances:
node   0   1   2   3
  0:  10  12  20  22
  1:  12  10  22  24
  2:  20  22  10  12
  3:  22  24  12  10
```
The preceding output shows the CPU core distribution of the Kunpeng 920 7260 processor. CPU cores 0 to 31 are in NUMA node 0, cores 32 to 63 in NUMA node 1, cores 64 to 95 in NUMA node 2, and cores 96 to 127 in NUMA node 3. To maximize the performance of the Docker container, you are advised to avoid cross-die and cross-chip memory access when binding CPU cores, as such access degrades performance.
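Because the cores in this layout are contiguous, with 32 cores per node, the node that owns a given core ID can be derived by integer division. A minimal sketch, assuming the Kunpeng 920 7260 topology shown above:

```shell
# On the layout above, node = core_id / 32 (integer division).
cores_per_node=32
for cpu in 5 40 70 100; do
  echo "CPU ${cpu} -> NUMA node $(( cpu / cores_per_node ))"
done
# Prints:
# CPU 5 -> NUMA node 0
# CPU 40 -> NUMA node 1
# CPU 70 -> NUMA node 2
# CPU 100 -> NUMA node 3
```

On other processors the cores may not be contiguous per node, so always confirm the actual ranges with `numactl -H` before binding.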
- Bind each container vCPU to a CPU and allocate memory in the same NUMA node to each vCPU.
Create a container named 8u16g_01 based on the Kunpeng 920 7260 processor, allocate eight CPU cores (4 to 11) to the container, and use NUMA node 0 with 16 GB of memory. In this example, the container image is centos:latest.
```
docker run -d -i -t --cpus=8 --cpuset-cpus=4-11 --cpuset-mems=0 -m 16g --name 8u16g_01 --privileged=true centos:latest /bin/bash
```
Command format: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Parameter description:
- -d: runs the container in the background and prints the container ID.
- -i: keeps STDIN open even if the container is not attached.
- -t: allocates a pseudo-TTY.
- --cpus: specifies the number of CPUs to be allocated.
- --cpuset-cpus: specifies the CPUs to which the container is bound. Use a hyphen (-) to specify a range of CPU IDs, for example, 4-11.
- --cpuset-mems: specifies the NUMA node.
- -m: specifies the memory size to be allocated. Include the unit in the value, for example, m for MB or g for GB.
- --name: specifies the Docker container name.
- --privileged: grants extended (root) privileges to the container.
- centos:latest: specifies the local image. centos indicates the repository, and latest indicates the tag.
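After the container starts, the binding can be verified by reading the kernel's allowed-CPU list for a process. A minimal sketch; the container name 8u16g_01 is the one from the example above:

```shell
# The kernel exposes each process's effective CPU affinity in
# /proc/<pid>/status. To check the container's binding, run the grep
# inside the container (e.g. via
#   docker exec 8u16g_01 grep Cpus_allowed_list /proc/1/status
# ), which for the example above should report 4-11.
grep Cpus_allowed_list /proc/self/status
```

The same field can also be checked with `docker inspect` on the host, which reports the configured `CpusetCpus` and `CpusetMems` values.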