Installing the DPAK SDK
DPAK applications are built on the SP680 SmartNIC, which is suited to virtualization and protocol-parsing scenarios. The following describes how to deploy the virtualization and OVS software that matches the SP680 SmartNIC. The deployment scripts are for reference only.
Deploying the Environment for OVS Offload
- Set up the environment
- Single-node deployment is supported. Install the SP680 SmartNIC on a physical machine. Enable input/output memory management unit (IOMMU) in the BIOS, configure huge pages, and install the SmartNIC driver.
- Disable SELinux and the firewall on the node. Make sure the node can access the Internet.
- Make sure the physical machines run openEuler 20.03 LTS SP1.
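Whether the huge-page configuration took effect can be verified after boot. The following is a minimal sketch, assuming a standard Linux /proc/meminfo layout; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: read the huge-page reservation from a meminfo-style file so the
# BIOS/kernel configuration can be confirmed after reboot.
hugepages_total() {
  # Print the HugePages_Total value from the given meminfo-formatted file.
  awk '/^HugePages_Total:/ {print $2}' "$1"
}

# /proc/meminfo exists on any Linux host; skip gracefully elsewhere.
[ -r /proc/meminfo ] && hugepages_total /proc/meminfo
```

A nonzero value indicates the reservation is in place; the count to configure depends on your workload.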
- Installation procedure
- Before installing the NIC driver, obtain the driver package and Hinic3_flash.bin file and save them to the ovs_build/driver directory.
```shell
cd ovs_studio
bash install_driver.sh
```
Restart the server after the driver is installed.
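After the restart, it is worth confirming that the driver actually loaded. The module name below (hinic3) is an assumption based on the Hinic3_flash.bin file name; check the actual name shipped with your driver package:

```shell
#!/bin/sh
# Sketch: check whether a kernel module appears in a /proc/modules-style list.
module_loaded() {
  # /proc/modules lists one module per line, with the name in the first field.
  grep -q "^$1 " "$2"
}

# "hinic3" is an assumed module name; adjust it to match your driver package.
if [ -r /proc/modules ] && module_loaded hinic3 /proc/modules; then
  echo "SmartNIC driver loaded"
else
  echo "SmartNIC driver not loaded"
fi
```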
- Enable the virtio-net device.
```shell
cd ovs_studio
bash virtio_enable.sh
```
After the server is rebooted, the virtio-net device is disabled again and must be re-enabled. To run the script automatically at system startup, perform the following operations:
- Add a service for the script. Refer to the following virtio-enable.service file:
```ini
[Unit]
Description=DPAK virtio enable script

[Service]
Type=simple
ExecStart=sh <path>/virtio_enable.sh

[Install]
WantedBy=multi-user.target
```
Replace <path> with the absolute path of the script.
- Place the virtio-enable.service file in the /usr/lib/systemd/system directory and set the file to be automatically executed upon system startup.
```shell
systemctl enable virtio-enable
```
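Before enabling the unit, a quick sanity check that the service file is in place and contains the expected directives can save a reboot cycle. A sketch, using the path from the step above:

```shell
#!/bin/sh
# Sketch: verify a systemd unit file carries the directives the steps above need.
check_unit() {
  grep -q '^ExecStart=' "$1" && grep -q '^WantedBy=multi-user.target$' "$1"
}

unit=/usr/lib/systemd/system/virtio-enable.service
if [ -f "$unit" ] && check_unit "$unit"; then
  echo "unit ok"
else
  echo "unit missing or incomplete"
fi
```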
- Deploy OVS. Before installation, obtain the DPU-solution-dpak-runtime-host-repo_1.0.0_aarch64.zip package, decompress it, and save its contents to the ovs_build/package directory. To use your own DPAK, DPDK, or Open vSwitch builds instead, place the corresponding RPM packages in the emptied ovs_build/package directory.
```shell
cd ovs_studio
bash install_env.sh
```
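install_env.sh expects the RPM packages to be staged in ovs_build/package. A quick way to confirm the staging (the helper is illustrative):

```shell
#!/bin/sh
# Sketch: count the RPM packages staged in a directory.
rpm_count() {
  ls "$1"/*.rpm 2>/dev/null | wc -l
}

dir=ovs_build/package
echo "$(rpm_count "$dir") RPM package(s) staged in $dir"
```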
Deploying the Environment for Virtualization Offload
- Set up the environment
- Four-node deployment is supported. OpenStack consists of a controller node, compute nodes, a network node, and a storage node. Four servers are used for installation: one for the controller node, one for the storage node, and two for the compute nodes. The network node is deployed on the same server as the controller node.
- Install the SP680 SmartNIC on the compute nodes only. Enable IOMMU in the BIOS, configure huge pages, and install the SmartNIC driver.
- Disable SELinux and the firewall on all nodes. Make sure the nodes can access the network.
- Make sure the physical machines run openEuler 20.03 LTS SP1.
- Installation procedure
- Configure the hosts file for all nodes. Add the following content to the /etc/hosts file:
```
xxx.xxx.xxx.xxx controller
xxx.xxx.xxx.xxx compute01
xxx.xxx.xxx.xxx compute02
xxx.xxx.xxx.xxx storager
```
xxx.xxx.xxx.xxx indicates the IP address of a specific node.
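A small check that all four names are present in /etc/hosts can catch typos before the deployment scripts run. A sketch, with the hostnames matching the entries above:

```shell
#!/bin/sh
# Sketch: verify that every node name from the table above is present
# in a hosts-format file.
check_hosts() {
  for name in controller compute01 compute02 storager; do
    grep -qw "$name" "$1" || { echo "missing: $name"; return 1; }
  done
  echo "all hosts present"
}

check_hosts /etc/hosts || echo "add the missing entries before continuing"
```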
- Install the Python dependency before deploying the controller node and compute nodes.
```shell
pip3 install pymysql
```
- Deploy the controller node. Create a .admin-openrc file in the /root directory with the following content:
```shell
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
Make the settings take effect.
```shell
source /root/.admin-openrc
```
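After sourcing the file, you can confirm the credentials are exported before running the deployment script. A minimal sketch, checking the variable names from the .admin-openrc file above:

```shell
#!/bin/sh
# Sketch: make sure the OpenStack credential variables are set in this shell.
check_openrc() {
  for var in OS_USERNAME OS_PASSWORD OS_AUTH_URL; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || { echo "unset: $var"; return 1; }
  done
  echo "openrc loaded"
}

check_openrc || echo "run: source /root/.admin-openrc"
```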
Run the deployment script.
```shell
cd dpak_studio
hostnamectl set-hostname controller
python3 install.py
```
- Replace controller in http://controller:5000/v3 with the controller node IP address.
- During the installation, you need to enter the password multiple times. Automatic deployment supports only the password 123456.
- Deploy compute nodes. Before the installation, obtain the DPAK software package and save it to the dpak_studio directory. Enable password-free login between compute nodes.
Compute node 1:
```shell
cd dpak_studio
hostnamectl set-hostname compute01
python3 install.py
```
Compute node 2:
```shell
cd dpak_studio
hostnamectl set-hostname compute02
python3 install.py
```
- Deploy the storage node. Configure password-free login to the controller node and compute nodes. Before the deployment, install crudini on the storage node.
```shell
wget https://github.com/pixelb/crudini/releases/download/0.9.3/crudini-0.9.3.tar.gz
tar -xf crudini-0.9.3.tar.gz
cd crudini-0.9.3
sed -i "s?env python?env python3?" crudini
cp crudini /usr/bin
```
Install Ceph.
```shell
hostnamectl set-hostname storager
cd dpak_studio
mkdir /etc/ceph
cp ceph_install.sh /etc/ceph
cd /etc/ceph
```
In the following command, enp125s0f0 indicates the network port on the storage plane, and sdb, sdc, and sdd indicate the empty block devices used by the OSDs.
```shell
sh ceph_install.sh enp125s0f0 sdb sdc sdd
```
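The last three arguments must name existing, unused block devices. A quick existence check for them (a sketch; the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: confirm each named block device exists under /dev before
# handing it to ceph_install.sh as an OSD disk.
check_devices() {
  for dev in "$@"; do
    [ -b "/dev/$dev" ] || { echo "missing block device: /dev/$dev"; return 1; }
  done
  echo "all devices present"
}

check_devices sdb sdc sdd || echo "fix the device list before installing Ceph"
```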