Binding NIC Interrupts to NUMA Cores
Principles
When a NIC receives a large number of requests, it generates interrupts to notify the kernel that new data packets have arrived, and the kernel then calls the interrupt handler to copy the packets from the NIC to memory. If the NIC has only one queue, only one core can copy packets at any given time. The NIC multi-queue mechanism removes this bottleneck: different cores can fetch data packets from different NIC queues in parallel.
When the NIC multi-queue function is enabled, the OS uses the irqbalance service to decide which CPU core processes the packets of each NIC queue. If the core that services an interrupt and the NIC are not on the same NUMA node, cross-NUMA memory access is triggered. Binding the cores that handle NIC interrupts to the NUMA node where the NIC resides avoids the extra overhead of cross-NUMA memory access and improves network processing performance.
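Before binding, you need to know which NUMA node the NIC belongs to and which cores that node owns; both can be read from standard sysfs locations (shown as comments below, with `ethx` as a placeholder interface name). The binding interface used later, `/proc/irq/$irq/smp_affinity_list`, accepts decimal core IDs, while its sibling `/proc/irq/$irq/smp_affinity` expects a hexadecimal core bitmask. The helper below is a minimal sketch of that mask arithmetic; the function name `cpu_to_mask` is illustrative, not part of any tool.

```shell
# Standard sysfs locations for the NUMA topology ("ethx" is a placeholder):
#   cat /sys/class/net/ethx/device/numa_node      # NUMA node of the NIC
#   cat /sys/devices/system/node/node0/cpulist    # cores on node 0, e.g. 0-23

# Sketch: convert a core ID to the hex bitmask form used by
# /proc/irq/$irq/smp_affinity. Valid for core IDs below 64; systems with
# more cores use comma-separated mask words.
cpu_to_mask() {
    printf '%x\n' $((1 << $1))
}

cpu_to_mask 0    # core 0 -> mask 1
cpu_to_mask 5    # core 5 -> mask 20
```

Using `smp_affinity_list` (as in the steps below) is usually simpler, since it takes the core ID directly instead of a mask.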


Modification Method
- Stop irqbalance.
# systemctl stop irqbalance.service
# systemctl disable irqbalance.service
- Set the number of NIC queues to the number of CPU cores.
# ethtool -L ethx combined 48
- Query the interrupt IDs of the NIC.
# cat /proc/interrupts | grep $eth | awk -F ':' '{print $1}'
- Bind each interrupt to a core based on its interrupt ID. cpuNumber indicates the core ID, which starts from 0.
# echo $cpuNumber > /proc/irq/$irq/smp_affinity_list
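The steps above can be combined into a loop that spreads the NIC's queue interrupts round-robin across the cores of the NIC's NUMA node. The sketch below only prints each IRQ-to-core pairing so the assignment logic is visible; the function name `assign_irqs`, the core list `0 1 2 3`, and the IRQ IDs are illustrative examples, not values from the source.

```shell
# Sketch: round-robin assignment of interrupt IDs to the cores of one
# NUMA node. The function prints "irq:core" pairs; applying a pair means
# running (as root): echo $core > /proc/irq/$irq/smp_affinity_list
assign_irqs() {
    local cpus=($1)           # e.g. "0 1 2 3": cores of the NIC's NUMA node
    shift
    local i=0
    for irq in "$@"; do       # interrupt IDs of the NIC queues
        echo "$irq:${cpus[$((i % ${#cpus[@]}))]}"
        i=$((i + 1))
    done
}

# Example: six queue interrupts spread over four cores of one node.
assign_irqs "0 1 2 3" 120 121 122 123 124 125
```

In practice the IRQ list comes from the `cat /proc/interrupts | grep $eth | awk -F ':' '{print $1}'` command shown above, and each pairing is applied with `echo $cpuNumber > /proc/irq/$irq/smp_affinity_list`.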