Deploying vKAE on VMs
This section uses an 8C16G VM as an example to describe how to deploy vKAE on a VM. The procedure covers preparing the KAE environment on the VM, installing KAE, passing VFs through to the VM, and verifying and enabling vKAE.
- Prepare the KAE environment on the VM.
- Install dependencies.
yum -y install kernel-devel-$(uname -r) openssl-devel numactl-devel gcc make autoconf automake libtool patch
The patch package must be included when installing dependencies on the VM; otherwise, an error is reported during the KAE installation. See the following figure.

- Obtain the KAE 2.0 source package.
git clone https://gitee.com/kunpengcompute/KAE.git -b kae2

- Install KAE using the source code.
The sh build.sh all command installs KAE in one click. Before running it, you are advised to run sh build.sh cleanup to clean up any previous build.
- Go to the KAE source code directory and perform the cleanup operation before the installation.
cd KAE
sh build.sh cleanup

- Install KAE in one-click mode.
sh build.sh all

If the following information is displayed, KAE is successfully installed.

- Before using vKAE, create VFs on the KAE device of the server and pass them through to the VM to enable vKAE acceleration. The HPRE accelerator is used here to accelerate encryption and decryption.
Check the names of the accelerators included in the installed KAE. In later steps, the accelerator name is used to find the PCI IDs of the accelerator device, and VFs are created based on those PCI IDs.
ls /sys/class/uacce

The command output in this step is only an example. The number of accelerators varies with the server. For example:

- Check the actual path and PCI ID of the hisi_hpre-1 accelerator device.
cd /sys/class/uacce/hisi_hpre-1/device
realpath .

- The following uses the hisi_hpre-1 accelerator as an example to describe how to create three KAE VFs to be passed through to the VM.
echo 3 > /sys/devices/pci0000:78/0000:78:00.0/0000:79:00.0/sriov_numvfs
Check whether the VFs of the KAE device are successfully created.
ls -al /sys/class/uacce
There are three virtual acceleration devices in addition to the physical acceleration device.
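The create-and-verify steps above can be wrapped in a small helper. The function below is only a hedged sketch (create_vfs is an illustrative name, not part of KAE); pass in the PF's sysfs directory for your accelerator's PCI ID.

```shell
# Hedged sketch: set the VF count on a PF and verify that the value took
# effect. "dev" is the PF's sysfs directory, for example
# /sys/devices/pci0000:78/0000:78:00.0/0000:79:00.0
create_vfs() {
    dev="$1"; n="$2"
    echo 0 > "$dev/sriov_numvfs"       # reset any existing VFs first
    echo "$n" > "$dev/sriov_numvfs"    # create the requested number of VFs
    [ "$(cat "$dev/sriov_numvfs")" = "$n" ]   # non-zero exit on mismatch
}
```

The reset to 0 matters on real hardware: the kernel rejects writing a new nonzero VF count to sriov_numvfs while VFs already exist.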
A server may have multiple HPRE accelerators. Each HPRE accelerator provides 1,024 queues. A PF uses 256 queues by default, and the other 768 queues are reserved for VFs. Number of VF queues = (1024 - Number of PF queues)/Number of VFs. The remaining queues are added to the last VF. You are advised to virtualize one PF into eight VFs.
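The queue split described above can be checked with a few lines of shell arithmetic. The 1,024 total queues and 256 PF queues are the defaults quoted above; the variable names are illustrative.

```shell
# Queue split for one HPRE accelerator: total queues minus PF queues,
# divided across the VFs; any remainder goes to the last VF.
total=1024
pf=256
vfs=8                                 # recommended number of VFs per PF
per_vf=$(( (total - pf) / vfs ))
rem=$(( (total - pf) % vfs ))
last_vf=$(( per_vf + rem ))
echo "queues per VF: $per_vf (last VF: $last_vf)"
# prints "queues per VF: 96 (last VF: 96)"
```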
Run the following command to check the folder where the created VFs are located:
cd /sys/class/uacce/hisi_hpre-1/device
ls

- After the three KAE VFs are created, three virtual acceleration devices (virtfn0, virtfn1, and virtfn2) appear in the hisi_hpre-1 accelerator list. Check the PCI IDs of virtfn0, virtfn1, and virtfn2 so that the VFs can be passed through to the VM.
cd virtfn0
realpath .

cd virtfn1
realpath .

cd virtfn2
realpath .

- Modify the configuration file of the VM and pass through the KAE VFs to the VM.
- Configure one KAE VF for the VM.
- Open the VM configuration file, for example, vm01.xml.
vim vm01.xml
- Press i to enter the insert mode. Copy the following content to the <devices> tag in the VM configuration file:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x79' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</hostdev>
- This configuration splits the VF address 0000:79:00.1 into its fields: domain is 0000, bus is 79, slot is 00, and function is 1.
- If the address already exists on the current VM, delete the address line following </source> to prevent VM startup failures caused by conflicting addresses. A new address is generated after the VM is restarted.
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Restart the VM for the KAE VF passthrough to take effect.
reboot

After the preceding configuration is complete, the KAE VF is successfully passed through to the VM.
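The address-splitting rule described above can be sketched as a small shell function. split_bdf is a hypothetical helper name, not part of KAE or libvirt; it is shown only to make the field mapping concrete.

```shell
# Split a PCI address such as 0000:79:00.1 into the domain/bus/slot/function
# fields used by the libvirt <address> element.
split_bdf() {
    addr="$1"
    domain=${addr%%:*}                  # e.g. 0000
    rest=${addr#*:}
    bus=${rest%%:*}                     # e.g. 79
    slot=${rest#*:}; slot=${slot%%.*}   # e.g. 00
    fn=${addr##*.}                      # e.g. 1
    printf "domain='0x%s' bus='0x%s' slot='0x%s' function='0x%s'\n" \
        "$domain" "$bus" "$slot" "$fn"
}

split_bdf 0000:79:00.1
# prints "domain='0x0000' bus='0x79' slot='0x00' function='0x1'"
```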
- Configure multiple KAE VFs for the VM.
- Open the VM configuration file, for example, vm01.xml.
vim vm01.xml
- Press i to enter the insert mode. Copy the following content to the <devices> tag in the VM configuration file:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x79' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x79' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
</hostdev>
- Restart the VM for the KAE VF passthrough to take effect.
reboot
- The following provides an example of a complete VM configuration file for your reference.
The configuration file of an 8C16G VM is used as an example. The VM name is nginx1. Sequential core binding and binding memory to NUMA nodes have already been configured for the VM. After modifying the configuration file, restart the VM for the VF passthrough to take effect.
- Open the VM configuration file, for example, vm01.xml.
vim vm01.xml
- Press i to enter the insert mode and copy the following content to the VM configuration file:
<domain type='kvm'>
  <name>vm01</name>
  <uuid>a1d11347-8738-45fb-8944-e3a058f464c9</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
    <emulatorpin cpuset='4-11'/>
  </cputune>
  <numatune>
    <memnode cellid='0' mode='strict' nodeset='0'/>
  </numatune>
  <os>
    <type arch='aarch64' machine='virt-6.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/nginx1_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <gic version='3'/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' dies='1' clusters='1' cores='8' threads='1'/>
    <numa>
      <cell id='0' cpus='0-7' memory='16777216' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/images/nginx1.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sda' bus='scsi'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:b4:09:bc'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='system-serial' port='0'>
        <model name='pl011'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x79' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>
  </devices>
</domain>
Modify parameters such as the VM name, UUID, disk image path, and PCI addresses as required.
- Restart the VM for the VF passthrough to take effect.
reboot
- Check the VF device again.
ls -al /sys/class/uacce
If the passed-through VF device is no longer displayed in the command output on the host, the VF is successfully passed through to the VM.
- Verify whether the KAE installation and configuration is complete.
- Check whether the KAE device is successfully installed on the VM.
lspci
ll /usr/local/lib/engines-1.1
If HPRE Engine appears in the lspci output and kae.so appears in the directory listing, the KAE device is successfully installed.


- Check whether the VF is successfully mounted to the VM.
ls -al /sys/class/uacce
If the passed-through VF device is no longer displayed on the physical machine, the passthrough is successful.
Check the address used by the VF:
cd /sys/bus/pci/drivers
cd hisi_hpre
ls

- Verify the KAE performance.
Configure OpenSSL to use KAE. Run the openssl speed command to compare the RSA encryption and decryption performance when KAE is enabled with that when KAE is disabled.
- Obtain the performance metrics before KAE is enabled.
openssl speed -elapsed rsa2048

- Obtain the performance metrics after KAE is enabled.
export OPENSSL_CONF=/home/openssl.cnf
openssl speed -engine kae -elapsed rsa2048

- Check whether the uacce module is loaded.
lsmod | grep uacce
If the following information is displayed, the uacce module is successfully loaded.
