
Testing the Performance of One PMD

Case No.

4.1.1

Objective

  1. Test the packet rate of a forwarding core on the server host.
  2. Test the bandwidth of a forwarding core on the server host.

Test Networking

See Figure 1.

Prerequisites

See Building a Test Environment.

Test Procedure

  1. Start the OVS service.
    service openvswitch start
    
  2. Set the DPDK startup parameters.
    ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true other_config:dpdk-socket-mem="4096" other_config:dpdk-lcore-mask="0x3" other_config:pmd-cpu-mask="0x2"
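
    The three masks in the command above are hexadecimal CPU bitmasks: bit N selects core N. As a quick sanity check (a sketch only; the mask values are the ones from the command above), the following decodes a mask into the core IDs it selects:

    ```shell
    # Decode a hex CPU mask into the core IDs it selects (bit N -> core N).
    decode_mask() {
        mask=$(( $1 ))
        core=0
        out=""
        while [ "$mask" -ne 0 ]; do
            [ $(( mask & 1 )) -eq 1 ] && out="$out $core"
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "cores:$out"
    }
    decode_mask 0x3   # dpdk-lcore-mask -> cores: 0 1
    decode_mask 0x2   # pmd-cpu-mask    -> cores: 1
    ```

    With pmd-cpu-mask="0x2", exactly one PMD thread runs (on core 1), which matches the single-PMD scenario of this test case.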
    
  3. Restart the OVS service.
    service openvswitch restart
    
  4. Set the VM queue.
    virsh edit VM1
    

    Change the value marked in the following figure to 2.

    Modify VM 1 to VM 8 one by one.
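
    The figure referenced above is not reproduced here. For orientation only: in libvirt domain XML, the queue count of a multiqueue vhost-user interface typically sits in the `<driver>` element, along the lines of the hypothetical fragment below (the socket path and other attributes are placeholders, not values from this document):

    ```xml
    <!-- Hypothetical fragment: vhost-user interface with 2 queues -->
    <interface type='vhostuser'>
      <source type='unix' path='/path/to/vhost-user.sock' mode='client'/>
      <model type='virtio'/>
      <driver queues='2'/>
    </interface>
    ```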

  5. Modify the networking script.
    1. Change the PCI addresses of the network ports based on step 4. You need to configure them only once.
    2. Change the number of queues of the network ports: change the values marked in the following figure to 2.

  6. Configure the network.
    sh topology_all.sh setvxlan
    sh topology_all.sh ct
    
  7. Start and log in to a VM based on Configuring the Client Host.
  8. Verify the network environment.

    If every client VM can ping all server VMs, the networking is successful. If not, contact Huawei technical support.
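
    The full-mesh check above can be scripted. The sketch below only prints the check commands to run, one per client/server pair; the host names and addresses are placeholders, not values from this document:

    ```shell
    # Print the ping checks for every client VM -> server VM pair.
    # All names and addresses below are placeholders.
    CLIENT_VMS="client-vm1 client-vm2"
    SERVER_IPS="10.0.0.11 10.0.0.12"
    for c in $CLIENT_VMS; do
        for s in $SERVER_IPS; do
            echo "ssh $c ping -c 1 $s"
        done
    done
    ```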

  9. Test the packet rate.
    1. Go to the /home directory of a server VM and run the test script.
      sh run.sh
      
    2. Check the packet rate (rx xxx p/s) of the server.
      vnstat -l
      

    3. Go to the /home directory of all client VMs and run the script on VM 1.
      sh run_8VM.sh
      
      NOTE:

      To test the open source OVS+DPDK, run the run_3_4VM.sh script.

      sh run_3_4VM.sh
      
    4. Check the packet rate of the server.

    5. Check the PMD usage of the server on the server host.
      watch -d -n 1 ovs-appctl dpif-netdev/pmd-rxq-show
      

      NOTE:

      If the sum of the marked values approaches 100% (generally above 90%; the displayed value lags by tens of seconds and is for reference only) and the packet rate of the server does not increase when the sending pressure on the client is raised, the server PMD has reached its bottleneck. If one client VM cannot generate enough pressure, run the same command on VM 2 to increase the sending pressure until the total PMD usage is close to 100%. Generally, one PMD requires one client VM, and four PMDs require three to four client VMs.
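
      Summing the per-queue figures by hand is error-prone. The one-liner below totals the "pmd usage" percentages; the sample output is illustrative only and assumes the pmd-rxq-show format of OVS 2.8 and later:

      ```shell
      # Sum the per-rxq "pmd usage" percentages from pmd-rxq-show output.
      # The sample below is illustrative, not captured from a real run.
      sample='pmd thread numa_id 0 core_id 1:
        isolated : false
        port: vhost-user1  queue-id:  0  pmd usage: 47 %
        port: vhost-user2  queue-id:  0  pmd usage: 46 %'
      echo "$sample" | awk '/pmd usage/ { s += $(NF-1) } END { printf "total pmd usage: %d %%\n", s }'
      ```

      In a live run, pipe `ovs-appctl dpif-netdev/pmd-rxq-show` into the awk command instead of the sample text.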

    6. Check the packet rate.
      vnstat -l
      

      In this case, the packet rate of the client VM must be greater than that of the server VM.

    7. Press Ctrl+C on the server to end statistics collection.

      The reported average packet rate is lower than the actual packet rate because it includes a period when the packet loss rate was 0. Therefore, the packet rate needs to be recalculated.

    8. Repeat 9.f to obtain the 1-minute average packet rate.

    9. The sum of packet rates of all VMs is the packet forwarding performance of the PMD.
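
    The per-VM rates read from vnstat can be totaled with a one-liner. The eight values below are placeholders for the eight server VMs, not measured results:

    ```shell
    # Sum per-VM packet rates (p/s) to get the PMD's total forwarding rate.
    # The eight values are placeholders, not measured data.
    printf '%s\n' 130000 128000 126000 124000 127000 125000 123000 122000 \
      | awk '{ s += $1 } END { printf "%d p/s total\n", s }'
    ```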
  10. Test the bandwidth.
    1. End all Netperf processes on the client VM.
      pkill netperf
      
    2. Keep the server in the same state as at the end of the packet rate test.
    3. Execute the bandwidth test script on a client VM.
      sh bw_8VM.sh
      
    4. Check the PMD usage of the server on the server host to determine whether the PMD has reached the bottleneck.
      watch -d -n 1 ovs-appctl dpif-netdev/pmd-rxq-show
      

      The bandwidth of the client is the same as that of the server. If the PMD usage reaches 90% or higher, the bandwidth has reached its bottleneck.

    5. After the server bandwidth reaches its bottleneck, recalculate the 1-minute average bandwidth.
      vnstat -l
      

      The total bandwidth of all VMs on the server is the bandwidth forwarding performance of the PMD.
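
      The 1-minute average can be recomputed from periodic samples in the same way. The sketch below averages hypothetical per-second bandwidth samples (the values are placeholders, not measured data):

      ```shell
      # Average bandwidth samples (Mbit/s) taken once per second over a window.
      # The four sample values are placeholders, not measured data.
      printf '%s\n' 9400 9350 9420 9380 \
        | awk '{ s += $1; n++ } END { printf "avg %.1f Mbit/s over %d samples\n", s / n, n }'
      ```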

Expected Results

The difference among multiple test results for the same CPU does not exceed 10%.

Test Results

Software offloading: 1.6798 million PPS; OVS+DPDK: 1.0097 million PPS

Remarks

-