Installing the Runtime Package of the Programming Framework
Prerequisites
- The DPU is installed in the PCIe x16 slot of module 1 on the server and connected to the riser card power cable. The riser card can be inserted into slot 2 or slot 5 of the server.
- An OS has been deployed on the DPU.
- The DPU firmware and driver have been installed.
Installing the Software Package
Perform the following installation and deployment steps once, for either an initial installation or an upgrade.
- Configure SSH accessibility for the DPU OS. For details, see Enabling the Remote Login Permission for the DPU OS User.
- Obtain the Data-Acceleration-Kit-Virtualization_{version}_FlexDA-runtime-VM.tar.gz software package based on Obtaining Software Packages and use the SFTP tool to transfer the package to the DPU.
- Go to the directory that stores this package and decompress the package.
tar zxvf Data-Acceleration-Kit-Virtualization_{version}_FlexDA-runtime-VM.tar.gz
cd Data-Acceleration-Kit-Virtualization_{version}_FlexDA-runtime-VM
Ensure that the path of the extracted directory does not contain spaces; otherwise, the installation script fails to execute.
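The space-in-path restriction above can be checked up front. The following is a minimal sketch (not part of the vendor package) that aborts before installation if the current directory path contains a space:

```shell
# Hypothetical pre-check, not part of the vendor package: abort early if
# the extraction path contains spaces, which the install script cannot handle.
case "$PWD" in
  *" "*)
    echo "Error: install path contains spaces: $PWD" >&2
    exit 1
    ;;
esac
echo "Install path OK: $PWD"
```

Running this from the extraction directory before sh dpak_ctl.sh install gives a clearer error than a mid-install failure.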
- Obtain the firmware Hinic3_flash.bin generated in Developing OVS Data Plane Code, create a dpak/firmware directory, and copy Hinic3_flash.bin to dpak/firmware.
- Obtain the flow table configuration file configinfo generated in Developing OVS Data Plane Code and deploy it in the DPU environment.
- Method 1: Copy configinfo to the /etc directory in the DPU environment.
- Method 2: Copy configinfo to the dpak/firmware directory; the installation script in the next step deploys it automatically.
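The file placement described in the two steps above can be sketched as a runnable snippet. A throwaway directory created with mktemp stands in for the DPU filesystem, and the file contents are dummies; on the real DPU you would copy the actual Hinic3_flash.bin and configinfo files to the same relative locations.

```shell
# Runnable sketch of the firmware/configinfo layout from the steps above.
# A throwaway root created with mktemp stands in for the DPU filesystem.
root=$(mktemp -d)

# Create dpak/firmware and place the firmware image there.
mkdir -p "$root/dpak/firmware"
printf 'dummy-firmware\n' > "$root/Hinic3_flash.bin"
cp "$root/Hinic3_flash.bin" "$root/dpak/firmware/"

# configinfo, method 1: copy it to /etc on the DPU.
mkdir -p "$root/etc"
printf 'dummy-flow-table\n' > "$root/configinfo"
cp "$root/configinfo" "$root/etc/"

# configinfo, method 2: copy it into the firmware directory instead;
# the install script then deploys it automatically.
cp "$root/configinfo" "$root/dpak/firmware/"

ls "$root/dpak/firmware"
rm -rf "$root"
```

Only one of the two configinfo methods is needed; the sketch shows both side by side.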
- Use the installation and deployment script to install the software package. This process will install the programming framework driver, O&M tool (dpak-smi), and network packages, while also upgrading the programming framework firmware.
sh dpak_ctl.sh install
- If the preceding command fails, retry the installation, or run the sh dpak_ctl.sh uninstall command and then install the software package again.
- If the error message "hinicadm: command not found" is displayed, install the driver and firmware according to Installation Process.
- If the execution stalls, press Enter to check for any prompt messages. Proceed with the operation according to the instructions provided in the prompts.
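The retry advice above can be wrapped in a small helper. This is a hypothetical sketch: install_with_retry is not a vendor command, and dpak_ctl.sh is the vendor script from the step above, so the real invocation is left commented out.

```shell
# Hypothetical wrapper implementing the retry advice above: if "install"
# fails, run "uninstall" and try "install" once more.
install_with_retry() {
  ctl=$1                      # e.g. "sh dpak_ctl.sh"
  $ctl install && return 0
  echo "install failed; uninstalling and retrying" >&2
  $ctl uninstall || true      # ignore uninstall errors and retry anyway
  $ctl install
}

# Usage on the DPU (commented out; requires the real vendor script):
# install_with_retry "sh dpak_ctl.sh"
```

If the second attempt also fails, inspect the script's log output rather than looping further.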
- After the installation is complete, run the reboot command on the host OS to reboot the server OS so that the software package can take effect.
Example command output:
[2025-05-27 20:51:44][INFO] The hiflexda-lib package is installed.
[2025-05-27 20:51:44][INFO] Cold upgrade custom firmware for hinic0.
Run gray_npu_ver is empty.
Please do not remove driver or network device. Loading...
Firmware update start: 2025-05-27 20:51:44
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] [100%][\]
Firmware update finish: 2025-05-27 20:52:18
Firmware update time used: 34s
Loading firmware image succeed.
Set update active cfg succeed!
Please reboot OS to take firmware effect.
[2025-05-27 20:52:21][INFO] hinic0 Loading firmware image succeed
[2025-05-27 20:52:21][INFO] update hinic0 firmware ok for install (reboot host to take effect driver and firmware)
[2025-05-27 20:52:21][INFO] update firmware -----------------pass
[2025-05-27 20:52:23][INFO] The dpak-smi-dpu package is installed.
[2025-05-27 20:52:23][INFO] The dpak-libdpdk_adapter package is installed.
[2025-05-27 20:52:24][INFO] The dpak-libovs package is installed.
- Change the NIC configuration template. The following uses the SP925D-VL VM scenario as an example. The template is eth_2x100G_dpu_ecs_blk.
hinicadm3 cfg_template -i hinic0 -s 0
Example command output:
***************** Current Info *******************
[Current   ] Cfg template index : 0
***************** Next Reset Cfg *****************
[Next Reset] Max support index  : 1
[Next Reset] Cfg template index : 0
[Next Reset] Firmware support cfg template name:
Template[ 0]: eth_2x100G_dpu_ecs_blk
Template[ 1]: eth_2x100G_dpu_bms_blk
- You can specify -s template_index to set the configuration template to be started next time.
- Select a template based on service requirements. In bare metal scenarios, use the eth_2x100G_dpu_bms_blk template (template index: 1).
- After the installation is complete, run the reboot command on the host OS to reboot the server OS so that the template can take effect.
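The template choice above can be sketched as a scenario-to-index mapping. The template names and indices come from the example output above; the "vm"/"bms" scenario labels are informal names used only in this sketch, not vendor CLI options, and the snippet only prints the command rather than running it.

```shell
# Sketch: map the deployment scenario to the template index listed by
# "hinicadm3 cfg_template" above, then print the command to run.
# The scenario labels are informal, not vendor CLI options.
scenario=${SCENARIO:-vm}
case "$scenario" in
  vm)  idx=0 ;;   # eth_2x100G_dpu_ecs_blk (VM scenario)
  bms) idx=1 ;;   # eth_2x100G_dpu_bms_blk (bare metal scenario)
  *)   echo "unknown scenario: $scenario" >&2; exit 1 ;;
esac
echo "hinicadm3 cfg_template -i hinic0 -s $idx"
```

Run the printed command on the DPU, then reboot so the template takes effect.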
- After the DPU OS is started, log in to the DPU OS via SSH and check whether the installed software versions meet the expectations.
- Query version information about the DPU.
dpak-smi info -t basic
Example command output:
Model: SP925D-VL
Manufacturer: Huawei
Card SN: xxxxxxxxxxxxxxxx
Card Board ID: 0xee
Card BOM ID: 0x1
Card PCB ID: Version A.
BMC IP: xx.xx.xx.xx
OS Version: openEuler 22.03 (LTS-SP3)
BIOS Version: B805
MCU Firmware Version: 3.5.45
MPU Firmware Version: 17.12.5.0
NP Firmware Version: 17.12.5.0
NP-CS Firmware Version: 17.6.3.3
CPLD Firmware Version: 2.4
- Model, Manufacturer, Card Board ID, Card BOM ID, and Card PCB ID are DPU hardware information. The values for DPUs of the same model are fixed.
- Card SN indicates the serial number of a DPU, which uniquely identifies a DPU.
- OS Version, BIOS Version, MCU Firmware Version, CPLD Firmware Version, MPU Firmware Version, and NP Firmware Version indicate the OS, BIOS, MCU firmware, CPLD firmware, MPU firmware, and NP firmware versions of the DPU, respectively. These versions are not fixed.
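When checking versions in a script rather than by eye, a single field can be extracted from the output above. In this sketch, a here-document containing a few lines of the sample output stands in for the live dpak-smi command so the snippet runs anywhere; get_field is a hypothetical helper name.

```shell
# Sketch: extract one field (here, MPU Firmware Version) from
# "dpak-smi info -t basic"-style "Key: Value" output.
get_field() {
  key=$1
  awk -F': ' -v k="$key" '$1 == k { print $2 }'
}

# A here-doc with sample output from above stands in for the live command.
mpu_ver=$(get_field "MPU Firmware Version" <<'EOF'
Model: SP925D-VL
MPU Firmware Version: 17.12.5.0
NP Firmware Version: 17.12.5.0
EOF
)
echo "MPU firmware: $mpu_ver"   # prints "MPU firmware: 17.12.5.0"
```

On the DPU, the same helper can be fed live output, e.g. dpak-smi info -t basic | get_field "MPU Firmware Version".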
- Query the version of the O&M tool.
dpak-smi -v
Example command output:
Component: dpak-smi-dpu
Feature: dpak-smi-dpu
Version: 26.0.RC1
- Query the version of the network acceleration feature.
dpak-ovs-ctl -v
Example command output:
Component: dpak-runtime
Feature: dpak-libovs/dpak-ovs-ctl
Version: 26.0.RC1
Parent topic: OVS Deployment