Configuring the Server
On the server, increase the file descriptor limit, disable SELinux, disable the audit service, pass a NIC through to the VM in SR-IOV VF passthrough mode, and bind NIC interrupts to cores.
- Increase the number of file descriptors on the server.
- Open the file.
vi /etc/security/limits.conf
- Press i to enter the insert mode and add the following content to the file. Set the soft and hard limits for all users (*), that is, set the number of file descriptors for both parameters to 102400.
* soft nofile 102400
* hard nofile 102400

- Press Esc, type :wq!, and press Enter to save the file and exit.
- Log out of the SSH session.
logout
Log in to the SSH terminal again for the new limits to take effect.
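Before logging out, the two limit lines can be sanity-checked with grep. The sketch below runs the same check against a temporary stand-in file, so it can be tried anywhere; on the real server, grep /etc/security/limits.conf directly.

```shell
# Sketch: verify that both nofile lines are present.
# A temporary file stands in for /etc/security/limits.conf here.
conf=$(mktemp)
printf '* soft nofile 102400\n* hard nofile 102400\n' > "$conf"
grep -c 'nofile 102400' "$conf"   # prints 2: one soft line, one hard line
rm -f "$conf"
```

After re-login, running ulimit -n should report 102400.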
- SELinux may restrict the access permission of applications. Therefore, disable SELinux to ensure that applications run properly. Disabling SELinux may impair system security and make the system more vulnerable to potential security problems and attacks. Assess the potential risks before disabling SELinux.
- Disable SELinux temporarily.
This operation becomes invalid after a server reboot.
setenforce 0
- Disable SELinux permanently.
This operation takes effect after a server reboot.
- Modify the configuration file to disable SELinux.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
cat /etc/selinux/config
If "SELINUX=disabled" is displayed in the output, the modification is successful.

- Restart the server after disabling SELinux. Similarly, restart the VM after disabling SELinux on the VM.
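The sed substitution above can be previewed safely. This sketch applies the same expression to a temporary copy, so the effect can be inspected without touching /etc/selinux/config; the sample file contents are illustrative.

```shell
# Sketch: the same sed substitution applied to a temporary copy
# instead of the real /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$cfg"
grep '^SELINUX=' "$cfg"   # prints SELINUX=disabled
rm -f "$cfg"
```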
- Disable the audit service.
- Open the file.
vim /boot/efi/EFI/openEuler/grub.cfg
- Press i to enter the insert mode. Add audit=0 to the kernel startup command line of the corresponding OS version.

- Press Esc, type :wq!, and press Enter to save the file and exit.
- Restart the server for the settings to take effect.
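The grub.cfg edit amounts to appending audit=0 to the linux command line. A minimal sketch of the same change, applied to a temporary copy (the sample linux line is illustrative, not a real entry from grub.cfg):

```shell
# Sketch: append audit=0 to a kernel command line, shown on a
# temporary copy; the real file is /boot/efi/EFI/openEuler/grub.cfg.
cfg=$(mktemp)
echo 'linux /vmlinuz root=/dev/mapper/root ro quiet' > "$cfg"
sed -i '/^[[:space:]]*linux /s/$/ audit=0/' "$cfg"
cat "$cfg"   # the line now ends with audit=0
rm -f "$cfg"
```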
- Pass through a NIC to the VM in the VF passthrough mode.
On the server, pass through a NIC to the VM in the VF passthrough mode to establish communication between the VM and the external network. When httpress is used for a test, the client can run commands on another server to connect to the Nginx service of the local server.
- Create three VFs for the NIC. The number of VFs to create depends on actual requirements.
echo 3 > /sys/class/net/enp7s0/device/sriov_numvfs
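Before writing to sriov_numvfs, it can be worth confirming how many VFs the NIC supports. This sketch reads sriov_totalvfs for the example interface enp7s0; the path only exists on a host that actually has that SR-IOV-capable NIC.

```shell
# Sketch: check the maximum number of VFs the NIC supports before
# creating them (enp7s0 is the example interface from this guide).
dev=/sys/class/net/enp7s0/device
if [ -r "$dev/sriov_totalvfs" ]; then
  echo "supported VFs: $(cat "$dev/sriov_totalvfs")"
else
  echo "no SR-IOV capability exposed for enp7s0"
fi
```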
- Obtain the information about bus-info of the NIC.
ethtool -i enp7s0

- Query the NUMA affinity of the NIC.
cat /sys/class/net/enp7s0/device/numa_node
The physical NIC has affinity with NUMA node0.

You can also run the following command to query the NUMA affinity of the NIC:
lspci -vvv -s 07:00.0 | grep NUMA
The physical NIC has affinity with NUMA node0.
Check the IDs of the CPU cores corresponding to NUMA node0.
lscpu
The IDs of the CPU cores corresponding to NUMA node0 are from 0 to 31.
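The node-to-core mapping can also be derived programmatically from lscpu's parseable output. This sketch filters CPU,NODE pairs for node 0; the sample data below is illustrative, not from a real machine (on a live host, pipe `lscpu -p=CPU,NODE` in instead).

```shell
# Sketch: derive per-node CPU IDs from `lscpu -p=CPU,NODE` style output.
# The sample data is illustrative only.
sample='# CPU,NODE
0,0
1,0
32,1
33,1'
printf '%s\n' "$sample" | awk -F',' '!/^#/ && $2 == 0 {print $1}' | paste -sd',' -   # prints 0,1
```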

- After three VFs are created for the physical NIC, check the bus information of the physical and virtual NICs.
cd /sys/class/net/enp7s0/device
ls

Check the NUMA affinity of virtfn2.
cd virtfn2
cat numa_node
The NIC VF has the same NUMA affinity as the physical NIC from which it was created, that is, NUMA node0.

- Check the PCI IDs of the NIC VFs.
- Method 1:
cd ../virtfn0
realpath .

cd ../virtfn1
realpath .

cd ../virtfn2
realpath .

- Method 2:
ethtool -i enp7s0v0

ethtool -i enp7s0v1

ethtool -i enp7s0v2

The PCI IDs are assigned in ascending order.
- Run the ip a command to check the three NIC VFs that are created. They are named enp7s0v0, enp7s0v1, and enp7s0v2.
ip a

- Stop the VM.
virsh shutdown <VM_name>
- Modify the VM configuration file. Copy the following content to the <devices> tag in the VM configuration file to pass through the VFs to the VM:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</hostdev>
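When several VFs are passed through, one such stanza is needed per VF. This sketch prints a source stanza for each VF function number; the bus/slot values match the example in this guide (bus 0x07) and must be adjusted to the PCI IDs reported for your own VFs.

```shell
# Sketch: emit one <hostdev> source stanza per VF function number.
# Bus/slot values below are the example values from this guide.
for fn in 0x1 0x2 0x3; do
cat <<EOF
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='${fn}'/>
  </source>
</hostdev>
EOF
done
```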
- Start the VM and check whether the configuration takes effect.
virsh start <VM_name>
After the VM is started, run the ip a command to check the NIC.
ip a
If the passed-through VF is no longer displayed in the command output, the VF passthrough to the VM is successful.
- Bind NIC interrupts to cores.
Bind the NIC interrupts to the affinity node of the NIC. In this example, the VF of the virtual NIC enp7s0v0 has affinity with NUMA node0.
- Create a core binding script file named irq_server.sh in the /home directory on the server.
vim irq_server.sh
- Copy the following content to the core binding script file:
#!/bin/bash
# chkconfig: - 50 50
# description: auto irq

# Obtain the CPU range of the socket where the NIC is located.
function get_default_cpu(){
    eth_numa_node=`cat /sys/class/net/${eth}/device/numa_node`
    numa_nodes=`lscpu | grep node\(s | awk '{print $NF}'`
    cpus=`lscpu | grep CPU\(s | head -1 | awk '{print $NF}'`
    sockets=`lscpu | grep Socket\(s | awk '{print $NF}'`
    cpus_per_socket=`lscpu | grep Core\(s | awk '{print $NF}'`
    numa_per_socket=$((${numa_nodes} / ${sockets}))
    eth_socket=$((${eth_numa_node} / ${numa_per_socket}))
    first_cpu=$[$[$[${cpus_per_socket}*${eth_socket}]]]
    last_cpu=$[$[${cpus_per_socket}*$[${eth_socket}+1]]-1]
    cpurange="${first_cpu}-${last_cpu}"
}

# Obtain the CPU list based on parameters.
function get_cpu_list(){
    IFS_bak=$IFS
    IFS=','
    cpurange=($1)
    IFS=${IFS_bak}
    cpulist_arr=()
    n=0
    for i in ${cpurange[@]};do
        start=`echo $i | awk -F'-' '{print $1}'`
        stop=`echo $i | awk -F'-' '{print $NF}'`
        for x in `seq $start $stop`;do
            cpulist_arr[$n]=$x
            let n++
        done
    done
}

# Bind interrupts to cores.
function bind(){
    ethtool -L ${eth} combined ${cnt}
    irq=`cat /proc/interrupts | grep ${eth} | awk -F ':' '{print $1}'`
    i=0
    for irq_i in $irq
    do
        if [ $i -ge ${#cpulist_arr[*]} ]; then
            i=0
        fi
        echo ${cpulist_arr[${i}]} "->" $irq_i
        echo ${cpulist_arr[${i}]} > /proc/irq/$irq_i/smp_affinity_list
        let i++
    done
}

# Read the information about the CPU bound to the NIC.
function check(){
    ethtool -l $eth
    irq=`cat /proc/interrupts | grep ${eth} | awk -F ':' '{print $1}'`
    for irq_i in $irq
    do
        cat /proc/irq/$irq_i/smp_affinity_list
    done
}

[[ $2 ]] && eth=$2 || eth=`ifconfig | grep -B 1 "192.168" | head -1 | awk -F":" '{print $1}'`
echo "$eth"
[[ $3 ]] && cnt=$3 || cnt=48
[[ $4 ]] && cpurange=$4 || get_default_cpu
get_cpu_list $cpurange
[[ $1 ]] && $1 || bind
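The core-list parsing inside the script accepts comma-separated entries, each either a single core or a start-stop range. The sketch below illustrates that expansion standalone; expand_cpulist is a simplified, hypothetical stand-in for the script's get_cpu_list function, not part of the script itself.

```shell
# Sketch: expand a core list such as '1-3,6' into individual core IDs,
# mirroring the get_cpu_list logic of irq_server.sh.
expand_cpulist() {
  echo "$1" | tr ',' '\n' | while IFS='-' read -r start stop; do
    seq "$start" "${stop:-$start}"
  done
}
expand_cpulist '1-3,6' | paste -sd' ' -   # prints 1 2 3 6
```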
Example
- Run the following command to set the number of NIC queues to 48 for the NIC on the 192.168 network segment (the defaults), that is, bind the NIC interrupts to the first 48 cores of the CPU:
sh irq_server.sh
- Read the information about the cores bound to the NIC.
sh irq_server.sh check eth1
eth1 indicates the NIC name. Replace it with the actual one.
- Change the number of queues for the eth1 NIC to 24 and bind the interrupts cyclically to cores 0 to 3. Both ranges and discrete cores are supported, for example, '1-3,6,7-9'.
sh irq_server.sh bind eth1 24 '0-3'
- Run the command to bind NIC interrupts to cores.
sh irq_server.sh bind enp7s0v0 4 '32-35'
- Check whether NIC interrupts are successfully bound to cores.
sh irq_server.sh check enp7s0v0
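The check step works by pulling the NIC's IRQ numbers out of /proc/interrupts and reading each smp_affinity_list. The IRQ extraction can be illustrated on its own; the sample lines below mimic the /proc/interrupts format and are illustrative, not from a real system.

```shell
# Sketch: extract the IRQ numbers of a NIC from /proc/interrupts-style
# text, as the check function in irq_server.sh does.
sample=' 60:  123  0  PCI-MSI  enp7s0v0-TxRx-0
 61:  0  456  PCI-MSI  enp7s0v0-TxRx-1'
printf '%s\n' "$sample" | grep enp7s0v0 | awk -F ':' '{print $1}' | tr -d ' ' | paste -sd' ' -   # prints 60 61
```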