Binding NICs to CPUs
- Perform the following operations each time the server is restarted.
- If multiple NICs are used on the server, perform the following steps for each NIC.
When the CPUs occupied by network interrupt handling overlap with the CPUs bound to a container, the container's CPU resources may behave abnormally. To avoid this problem, bind NICs with heavy traffic and heavy load to idle CPUs.
- Run the following command to query the PCI device number of a NIC. This document uses NIC enp125s0f1 as an example.
ethtool -i enp125s0f1 | grep bus-info | awk '{print $2}'
In the following command output, the PCI device number of enp125s0f1 is 0000:7d:00.1.
0000:7d:00.1
- Run the following command to query the interrupts related to the NIC. In the command, ${id_pci} indicates the NIC device number obtained in step 1.
cat /proc/interrupts | grep "${id_pci}" | awk -F: '{print $1}'
In the following command output, the interrupts corresponding to the NIC are 358 and 359.
358
359
If the NIC involves a large number of interrupts, check whether the interrupts are bound to different CPUs and determine whether to change the bound CPUs based on the check result.
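The first two steps can be chained into a single script; a minimal sketch, assuming the document's example NIC enp125s0f1 (the variable names are illustrative):

```shell
#!/bin/sh
# Sketch: chain the PCI lookup and the interrupt query for one NIC.
# "enp125s0f1" is the example NIC from this document; adjust as needed.
nic=enp125s0f1

# Step 1: PCI device number of the NIC, e.g. 0000:7d:00.1
id_pci=$(ethtool -i "$nic" | grep bus-info | awk '{print $2}')

# Step 2: interrupt IDs that reference this PCI device, e.g. 358 359
irqs=$(grep "$id_pci" /proc/interrupts | awk -F: '{print $1}' | tr -d ' ')

echo "NIC $nic (PCI $id_pci) uses interrupts: $irqs"
```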
- Query the CPUs to which the interrupts are bound. ${break_value} in the command is the NIC interrupt ID queried in the previous step.
cat /proc/irq/${break_value}/smp_affinity_list
- If the interrupts are bound to different CPUs and the CPUs bound to the NIC interrupts do not conflict with the CPUs bound to the container, skip the following steps in this section.
- If most of the interrupts are on the same CPU, or the NIC interrupts with high CPU usage need to be bound to an idle CPU, bind the NIC interrupts to the reserved CPUs by referring to steps 4 and 5. CPUs on the NUMA node to which the NIC belongs are preferred.
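Checking each interrupt's affinity one by one can be scripted as a loop; a sketch using the document's example interrupt IDs 358 and 359:

```shell
#!/bin/sh
# Sketch: print the CPU affinity of each NIC interrupt so that
# conflicts with container-bound CPUs are easy to spot.
# The interrupt IDs are the example values from this document.
for break_value in 358 359; do
    cpus=$(cat /proc/irq/${break_value}/smp_affinity_list)
    echo "interrupt ${break_value} -> CPU(s) ${cpus}"
done
```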
- Check the NUMA node to which the NIC connects based on the PCI device number. In the command, ${id_pci} indicates the device number of the NIC obtained in step 1. Run the following command. The value of NUMA node in the command output is the NUMA node to which the NIC connects.
lspci -vvvs ${id_pci}
As shown in the following command output, the NUMA node of enp125s0f1 obtained based on the PCI device number is 0.
7d:00.1 Ethernet controller: Huawei Technologies Co., Ltd. HNS GE/10GE/25GE Network Controller (rev 21)
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        NUMA node: 0
        Region 0: Memory at 121040000 (64-bit, prefetchable) [size=64K]
        Region 2: Memory at 120400000 (64-bit, prefetchable) [size=1M]
        Capabilities: [40] Express (v2) Endpoint, MSI 00
- Bind NIC interrupts to the reserved CPUs. CPUs on the NUMA node to which the NIC belongs are preferred.
In the following commands, ${break_1} and ${break_2} are the IDs of the two NIC interrupts.
- Bind interrupt ${break_1} to CPU 1.
echo 1 > /proc/irq/${break_1}/smp_affinity_list
- Bind interrupt ${break_2} to CPU 2.
echo 2 > /proc/irq/${break_2}/smp_affinity_list
Take NIC enp125s0f1 as an example. The corresponding interrupts are 358 and 359, and the corresponding commands are as follows:
echo 1 > /proc/irq/358/smp_affinity_list
echo 2 > /proc/irq/359/smp_affinity_list
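The two echo commands generalize to a loop that assigns each interrupt its own CPU; a sketch assuming root privileges and the document's example interrupt IDs (the starting CPU number is illustrative):

```shell
#!/bin/sh
# Sketch: bind each NIC interrupt to its own CPU, counting up from a
# starting CPU on the NIC's NUMA node. Must run as root; the starting
# CPU and interrupt IDs are illustrative values from this document.
cpu=1                      # first reserved CPU (on the NIC's NUMA node)
for irq in 358 359; do     # example interrupt IDs
    echo "$cpu" > /proc/irq/${irq}/smp_affinity_list
    cpu=$((cpu + 1))
done
```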
After obtaining the NUMA node to which the NIC connects, run the following command to view the core range of the NUMA node:
lscpu
As shown in the command output, the core range of NUMA node0 is 0 to 31.
NUMA node0 CPU(s):   0-31
NUMA node1 CPU(s):   32-63
NUMA node2 CPU(s):   64-95
NUMA node3 CPU(s):   96-127
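The NUMA node's CPU range can also be extracted programmatically; a sketch assuming the lscpu output format shown above:

```shell
#!/bin/sh
# Sketch: extract the CPU range of a given NUMA node from lscpu.
node=0   # NUMA node of the NIC, from the lspci step
range=$(lscpu | grep "NUMA node${node} CPU(s):" | awk '{print $NF}')
echo "NUMA node${node} CPUs: ${range}"
```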