
Tuning the OS

Tune the OSs of the physical machine and VM to improve server performance.

Physical Machine

Optimize the physical machine OS by modifying GRUB parameters.

  1. Open the /etc/grub2-efi.cfg file.
    vi /etc/grub2-efi.cfg
  2. Press i to enter insert mode and add the following IOMMU configuration to the end of the kernel parameters:
    iommu.passthrough=1 pci=realloc kvm-arm.vgic_v4_enable=1 audit=0

    kvm-arm.vgic_v4_enable=1 is applicable only to new Kunpeng 920 processor models.

  3. Press Esc, type :wq!, and press Enter to save the file and exit.
  4. Restart the physical machine OS for the configuration to take effect.
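The edit above appends the parameters to the kernel command line. The sketch below shows the same append performed with sed on a scratch copy of a GRUB config; the file contents and path handling here are illustrative, and on the real machine you would edit /etc/grub2-efi.cfg directly as described in the steps.

```shell
#!/bin/bash
# Sketch: append the IOMMU parameters to the kernel line of a scratch GRUB
# config copy. The "linux ..." line below is an illustrative placeholder.
PARAMS='iommu.passthrough=1 pci=realloc kvm-arm.vgic_v4_enable=1 audit=0'

cfg=$(mktemp)
printf '%s\n' 'linux /vmlinuz root=/dev/sda2 ro quiet' > "$cfg"

# Append the parameters to the end of every kernel command line.
sed -i "s|^\(linux .*\)\$|\1 ${PARAMS}|" "$cfg"

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```

After rebooting, the active command line can be confirmed with `cat /proc/cmdline`.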

Virtual Machine

Configure file descriptor limitations, disable Audit, configure huge pages, and optimize NIC interrupt-core binding to tune the VM OS.

  1. Extend file descriptors.
    1. Open the /etc/security/limits.conf file.
      vi /etc/security/limits.conf
    2. Press i to enter insert mode and add the following two lines, which set the soft and hard file descriptor limits for every user to 102400. Without this change, the number of files the software can open is capped at the default of 1024 during the test, which can degrade server performance.
      *   soft     nofile      102400
      *   hard     nofile      102400
    3. Press Esc, type :wq!, and press Enter to save the file and exit.
    4. Restart the VM OS for the configuration to take effect.
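After logging in to the restarted VM, the new limits can be checked with ulimit. A minimal sketch (the 102400 target matches the limits.conf entries above; adjust if you chose a different value):

```shell
#!/bin/bash
# Sketch: confirm the per-process file descriptor limits after re-login.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft nofile limit: ${soft}"
echo "hard nofile limit: ${hard}"

# Warn if the soft limit is still at the common 1024 default.
if [ "${soft}" != "unlimited" ] && [ "${soft}" -le 1024 ]; then
    echo "WARNING: soft limit is ${soft}; the limits.conf change is not in effect yet"
fi
```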
  2. Disable Audit and configure huge pages.
    1. Open the /etc/grub2-efi.cfg file.
      vi /etc/grub2-efi.cfg
    2. Press i to enter insert mode and add the following configuration to the end of the kernel parameters. To configure huge pages, append the huge page parameters (for example, default_hugepagesz, hugepagesz, and hugepages) with values suited to your workload in the same way:
      audit=0
    3. Press Esc, type :wq!, and press Enter to save the file and exit.
    4. Restart the VM OS for the configuration to take effect.
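After the reboot, you can confirm that a parameter took effect by checking the running kernel command line. A minimal sketch, using a sample command-line string so it runs anywhere; on the tuned VM you would pass `"$(cat /proc/cmdline)"` instead:

```shell
#!/bin/bash
# Sketch: check whether a kernel parameter token is present on the command line.
has_param() {
    # $1 = kernel command line, $2 = parameter (exact token, e.g. audit=0)
    case " $1 " in
        *" $2 "*) echo yes ;;
        *)        echo no  ;;
    esac
}

# Sample command line; replace with "$(cat /proc/cmdline)" on the real VM.
cmdline='BOOT_IMAGE=/vmlinuz root=/dev/sda2 ro audit=0'
has_param "$cmdline" 'audit=0'
has_param "$cmdline" 'audit=1'
```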
  3. Bind NIC interrupts to cores for optimization.
    1. Create a Bash script file named irq.sh. Write the following content to the file to bind NIC interrupts to specific CPU cores:
      #!/bin/bash
      # Bind each interrupt of the given NIC to CPU cores start_core..end_core
      # in round-robin order.
      iface=$1        # NIC interrupt name, e.g. virtio0-input
      start_core=$2   # first CPU core ID in the binding range
      end_core=$3     # last CPU core ID in the binding range
      
      # Collect the IRQ numbers of the NIC from /proc/interrupts.
      irq_list=($(grep "${iface}" /proc/interrupts | awk -F: '{print $1}'))
      
      ncpu=$start_core
      
      for irq in "${irq_list[@]}"
      do
              # Pin this IRQ to the current core, then advance round-robin.
              echo ${ncpu} > /proc/irq/${irq}/smp_affinity_list
              cat /proc/irq/${irq}/smp_affinity_list
              (( ncpu+=1 ))
              if (( ncpu > end_core )); then
                      ncpu=$start_core
              fi
      done
      

      The input parameters for binding interrupts to cores are described as follows:

      • iface: NIC interrupt name, for example, virtio*-input or virtio*-output. The asterisk (*) indicates the sequence number in the actual network port queue name. Change it based on actual requirements.

        Run the following command to check the network port queue name:

        cat /proc/interrupts
      • start_core: ID of the CPU core that starts the binding, for example, 0. Change it as required.
      • end_core: ID of the CPU core that ends the binding, for example, 1 (if there is only one core). Change it as required.
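To see what the script's irq_list extraction produces, the same grep/awk pipeline can be run against a sample /proc/interrupts excerpt. A sketch (the device names and IRQ numbers below are illustrative):

```shell
#!/bin/bash
# Sketch: run the irq.sh extraction pipeline on a sample /proc/interrupts
# excerpt to show what irq_list will contain.
sample=$(mktemp)
cat > "$sample" <<'EOF'
           CPU0       CPU1
 44:       1200          0   ITS-MSI  virtio0-input.0
 45:          3          0   ITS-MSI  virtio0-output.0
 46:        980          0   ITS-MSI  virtio0-input.1
EOF

# Same pipeline as in irq.sh: keep the IRQ number before the colon.
irq_list=($(grep 'virtio0-input' "$sample" | awk -F: '{print $1}'))
echo "IRQs for virtio0-input: ${irq_list[@]}"
rm -f "$sample"
```

On the real VM, run the pipeline against /proc/interrupts itself.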
    2. Stop and disable the irqbalance service.
      systemctl stop irqbalance
      systemctl disable irqbalance
    3. Run the script.
      For a VM with 4 vCPUs, you are advised to bind the virtio*-input.* interrupts to two cores (for example, cores 0 and 1) and to distribute the virtio*-output.* interrupts across all cores of the VM (for example, cores 0 to 3) in sequence. Because the script passes the name to grep as a regular expression, replace the asterisk with the actual sequence number (for example, virtio0-input), or use .* to match all queues, and quote the argument so that the shell does not expand it:
      bash irq.sh 'virtio.*-input' 0 1
      bash irq.sh 'virtio.*-output' 0 3
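The round-robin assignment performed by irq.sh can be simulated without touching /proc/irq. A sketch with four illustrative IRQ numbers bound across cores 0 and 1, matching the virtio*-input example above:

```shell
#!/bin/bash
# Sketch: simulate irq.sh's round-robin core assignment (no root needed).
start_core=0
end_core=1
irq_list=(44 45 46 47)   # illustrative IRQ numbers

ncpu=$start_core
assigned=()
for irq in "${irq_list[@]}"; do
    assigned+=("IRQ ${irq} -> core ${ncpu}")
    (( ncpu+=1 ))
    if (( ncpu > end_core )); then
        ncpu=$start_core
    fi
done
printf '%s\n' "${assigned[@]}"
```

After running the real script, the result can be verified by reading /proc/irq/&lt;IRQ&gt;/smp_affinity_list for each bound IRQ.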