
Network Acceleration

As data centers grow rapidly and network bandwidth increases, infrastructure tasks consume an ever larger share of server CPU resources. Service deployment and management on servers therefore become more complex, and fewer CPU cycles remain for services. With DPUs, infrastructure tasks can be offloaded from CPUs to DPUs, freeing CPU resources to run service applications and improving server efficiency. The network acceleration function offloads part of the OVS workload to the DPU and supports hardware flow entry issuance, achieving high-speed data forwarding while reducing CPU usage.

The network acceleration SDK consists of three types of modules: the vSwitch, the libdpak_ovs network acceleration library, and the device drivers.

  • vSwitch: Contains the OVS and DPDK components. OVS uses standard DPDK interfaces to forward network packets.
  • Network acceleration library: The libdpak_ovs library encapsulates the user-space NIC driver into a DPDK standard driver.
  • Device drivers: Include the user-space NIC driver and kernel-space NIC driver. The device drivers work with the DPU hardware engine to implement fast forwarding of network data, and support functions such as bond hardware offload, flow entry issuance, and programmability.

Implementation Principles

On the host, a DPU is presented as multiple VirtIO-net Physical Functions (PFs) or Virtual Functions (VFs), which are used as packet forwarding interfaces.

The DPU receives the first packet of a flow and sends it through the DPDK upcall queue to the vSwitch software. The vSwitch executes the upcall process, generates a flow entry, and issues it to libpak_ovs. Then, libdpak_ovs offloads this software flow entry to hardware through the user-space driver. Subsequent packets of the same flow match the exact flow table and are forwarded directly by the DPU, without being sent to the vSwitch for software-based table lookup and forwarding.
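The fast-path/slow-path split described above can be modeled in a few lines of Python. This is a conceptual sketch only; the names (FlowCache, vswitch_upcall) are invented for illustration and are not SDK interfaces.

```python
# Illustrative model of the fast path (hardware exact flow table) and
# slow path (vSwitch upcall). All names here are hypothetical.

class FlowCache:
    """Models the DPU exact flow table plus the vSwitch slow path."""

    def __init__(self, slow_path):
        self.table = {}          # exact flow table "offloaded to hardware"
        self.slow_path = slow_path
        self.upcalls = 0         # packets punted to the vSwitch
        self.fast_hits = 0       # packets forwarded directly by "hardware"

    def forward(self, packet_key):
        action = self.table.get(packet_key)
        if action is None:
            # First packet of a flow: upcall to the vSwitch, which
            # computes an action; the resulting flow entry is then
            # installed so later packets skip the slow path.
            self.upcalls += 1
            action = self.slow_path(packet_key)
            self.table[packet_key] = action
        else:
            self.fast_hits += 1
        return action

# Toy slow path: classify the flow and pick an output port.
def vswitch_upcall(packet_key):
    src, dst = packet_key
    return f"output:port-{hash(dst) % 4}"

cache = FlowCache(vswitch_upcall)
flow = ("10.0.0.1", "10.0.0.2")
first = cache.forward(flow)    # miss -> upcall, flow entry installed
second = cache.forward(flow)   # hit  -> forwarded without an upcall
assert first == second
assert cache.upcalls == 1 and cache.fast_hits == 1
```

The key property shown is that only the first packet of a flow pays the software cost; every later packet is resolved by the installed exact-match entry.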

The FlexDA programming framework allows developers to extend the existing basic functions to meet custom requirements.

Specific Modules

Figure 1 Modules of the network acceleration feature
  • dpak-ovs-ctl: Network offload O&M command line tool. It sends operation instructions to libdpak_ovs.so through the Unix socket and displays the returned execution results.
  • libdpak_ovs.so: Network offload driver. It implements network offload functions based on the smartIO user-space driver and complies with the DPDK driver framework.
  • OVS patch: A reference OVS patch for network offload. A self-developed vSwitch can be used instead of modifying OVS.
  • hydrainfo.h: Header file for defining custom key/action structures and enumerations.
  • configinfo: Configuration file for customizing the size, type, and capacity of multiple tables.
  • User-space device driver: Provides APIs for libdpak_ovs.so to implement network acceleration functions.
  • Kernel-space device driver (host): Manages NIC hardware on the host.
  • Kernel-space device driver (DPU): Manages NIC hardware on the DPU.
  • bin: Firmware generated via the programming framework. You can customize a data plane pipeline, including packet parsing logic, key matching logic, and action execution logic.
  • OVS: Open vSwitch software, which implements packet forwarding and flow table learning. OVS is used here as an example; a self-developed vSwitch built on standard DPDK APIs can be used instead.
  • DPDK: Open-source data plane forwarding framework that supports user-space packet forwarding and achieves higher software forwarding efficiency by dedicating CPU cores to packet processing.
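As an illustration of the kind of settings configinfo controls, a table definition might look like the fragment below. The section name, keys, and values are hypothetical; the actual file syntax is defined by the SDK.

```ini
; Hypothetical example; consult the SDK for the real configinfo syntax.
[exact_flow_table]
type = exact          ; exact-match lookup
key_size = 64         ; bytes per match key
capacity = 262144     ; maximum number of flow entries
```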