Enabling Confidential Device Passthrough
Confidential VMs (cVMs) in the TEE can have PCIe devices, including NICs, drives, and GPUs, passed through to them directly. This section describes how to enable I/O device passthrough to cVMs, which greatly improves I/O performance.
The following devices have been verified as compatible with cVMs.
NICs: SP680, SP580, and HP382
Drives: ES3000 V5 and ES3000 V6
GPUs: NVIDIA A100 and NVIDIA L20
Constraints
Constraints for device passthrough:
- The Virtualized Arm Confidential Compute Architecture with TrustZone (virtCCA) device passthrough function does not support stage 1 system memory management unit (SMMU).
- The virtCCA device passthrough function does not support device authentication.
Constraints for SR-IOV:
- If a VF assigned to a cVM is destroyed in the REE while the cVM itself still exists, the same PF cannot create a VF with the same BDF number, because the new VF would pose security risks such as residual data. A VF with that BDF number can be created only after the cVM is completely destroyed.
- When virtCCA implements confidential device passthrough, once a single VF is passed through to a cVM, all PFs and VFs of the physical device to which that VF belongs are permanently switched to the secure state and cannot be reverted to the insecure state. If a non-passthrough VF loads the device driver in the REE, reliability issues such as hardware or driver exceptions may occur, potentially leading to device malfunction or even system failure. To avoid these risks, unbind all VF drivers associated with the physical device before passing VFs through to a cVM. After the drivers are unbound, non-passthrough VFs in the REE cannot be operated.
Procedure
- Enable the virtCCA and SMMU secure-mode initialization.
- Open the grub.cfg file.
vim /boot/efi/EFI/openEuler/grub.cfg
- Press i to enter the insert mode and add the following parameters to the kernel command line of the HOST OS entry:
virtcca_cvm_host=1 arm_smmu_v3.disable_ecmdq=1 vfio_pci.disable_idle_d3=1

- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
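The grub.cfg edit above can also be scripted. A minimal sketch, assuming a grub-style config whose kernel lines start with `linux` (the file content below is illustrative, not the real grub.cfg):

```shell
# Work on a scratch copy of a grub-style config (illustrative content).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
menuentry 'openEuler' {
        linux /vmlinuz root=/dev/mapper/openeuler-root ro
}
EOF

# Append the passthrough-related parameters to every kernel command line.
sed -i 's|^\(\s*linux\s.*\)|\1 virtcca_cvm_host=1 arm_smmu_v3.disable_ecmdq=1 vfio_pci.disable_idle_d3=1|' "$cfg"

grep 'linux' "$cfg"
```

After rebooting, the parameters can be confirmed with `cat /proc/cmdline`.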
- Enable the SMMU in the BIOS.
- Log in to the BIOS. For details, see "Accessing the BIOS" in TaiShan Server BIOS Parameter Reference (Kunpeng 920 Processor).
- Choose the BIOS menu that contains the SMMU setting.
- Set Support Smmu to Enabled and press F10 to save the settings and exit.

- Compile the guest OS.
- Obtain the source code.
- Generate the default configuration.
- Go to the kernel directory and modify the defconfig file.
cd /usr/src/linux-6.6.0-98.0.0.103.oe2403sp2.aarch64/
vim arch/arm64/configs/openeuler_defconfig
The kernel version depends on the installed kernel source package. Replace the example kernel directory with the actual one.
- Modify the compilation options as follows:
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_SCSI_VIRTIO=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=y
CONFIG_VIRTIO_PCI=y
CONFIG_EXT4_FS=y
# CONFIG_DEBUG_INFO_BTF is not set
CONFIG_SOFTLOCKUP_DETECTOR=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_PREEMPT_NONE=y
- Modify Kconfig.
- Modify the drivers/block/Kconfig file.
- Open the drivers/block/Kconfig file.
vim drivers/block/Kconfig
- Press i to enter the insert mode and change tristate "Virtio block driver" to the following:
bool "Virtio block driver"

- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
- Modify the drivers/net/Kconfig file.
- Open the drivers/net/Kconfig file.
vim drivers/net/Kconfig
- Press i to enter the insert mode and change tristate "Virtio network driver" to the following:
bool "Virtio network driver"

- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
- Modify the drivers/virtio/Kconfig file.
- Open the drivers/virtio/Kconfig file.
vim drivers/virtio/Kconfig
- Press i to enter the insert mode and change tristate under config VIRTIO_PCI_LIB to bool.

- Change tristate "PCI driver for virtio devices" as follows:
bool "PCI driver for virtio devices"

- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
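The three Kconfig edits above (changing each virtio driver from `tristate` to `bool` so it can only be built in) can also be done with sed instead of vim. A sketch on a scratch file with illustrative Kconfig content:

```shell
# Work on a scratch copy of a Kconfig fragment (illustrative content).
kconf=$(mktemp)
cat > "$kconf" <<'EOF'
config VIRTIO_BLK
	tristate "Virtio block driver"
	depends on VIRTIO
EOF

# Turn the driver from a loadable-module option into a built-in-only option.
sed -i 's|tristate "Virtio block driver"|bool "Virtio block driver"|' "$kconf"

grep 'bool' "$kconf"
```

The same one-line substitution applies to drivers/net/Kconfig and drivers/virtio/Kconfig with the corresponding prompt strings.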
- Generate the .config file.
make openeuler_defconfig
- Add the NVMe SSD and NIC driver configurations. Enable compilation options such as BLK_DEV_NVME, NVME_CORE, VXLAN, MLXFW, IOMMUFD, VFIO, MLX5_VFIO_PCI, and MLX5_CORE.
- Open menuconfig.
make menuconfig
- On the menuconfig screen, input / to go to the search screen. On the search screen, input the compilation option to be enabled and press Enter to search for the option.

- After the search is complete, input 1 to jump to the dependency option (NVME_CORE in this example).

- Press Space to change the NVME_CORE mode from M to *. After the setting is complete, NVME_CORE is enabled.

- Press Esc twice to return to the previous menu.

- BLK_DEV_NVME is enabled after the dependency option is enabled.

- Enable all the listed compilation options in the same way, save the settings, and run the following commands to perform the compilation. The compilation result takes effect after the VM is restarted.
export LOCALVERSION="-$(uname -r | cut -d- -f2-)"
make include/config/kernel.release
make -j$(nproc)
make modules_install
make install
sync
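menuconfig is interactive; the same module-to-built-in changes can be made non-interactively by editing the generated .config (the kernel tree's own scripts/config helper does this more robustly). A sketch on a mock .config file, flipping the listed options from =m to =y:

```shell
# Work on a mock .config (illustrative content).
dotconfig=$(mktemp)
cat > "$dotconfig" <<'EOF'
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
CONFIG_MLX5_CORE=m
EOF

# Change each listed option from a loadable module (=m) to built-in (=y).
for opt in NVME_CORE BLK_DEV_NVME MLX5_CORE; do
    sed -i "s|^CONFIG_${opt}=m|CONFIG_${opt}=y|" "$dotconfig"
done

cat "$dotconfig"
```

When editing a real kernel .config this way, run `make olddefconfig` afterward so dependent options are resolved consistently.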
- Enable SR-IOV.
The following devices are supported: Huawei ES3000 V6 NVMe SSD (using the NVMe driver), Hi1823 NIC (SP680), and Mellanox NIC (using the MLX5 driver). The Hi1823 NIC supports VF RoCE.
- Create VFs.
- NVMe driver
echo ${VF_NUM} > /sys/class/nvme/nvme0/device/sriov_numvfs
# Allocates resources.
nvme virt-mgmt -c ${CTRL_ID} -a 7 /dev/nvme0
nvme virt-mgmt -c ${CTRL_ID} -r 0 -a 8 -n 8 /dev/nvme0
nvme virt-mgmt -c ${CTRL_ID} -a 9 /dev/nvme0
# Search for available namespaces.
nvme list-ns -a /dev/nvme0
# Create a namespace (if no namespace is available).
nvme create-ns /dev/nvme0 --nsze ${NS_SIZE} --ncap ${NS_SIZE} --flbas 0x0 --dps 0 --nmic 0
# Attach the namespace to the CTRL_ID that corresponds to the VF.
nvme attach-ns -n ${NSID} -c ${CTRL_ID} /dev/nvme0
CTRL_ID indicates the controller ID corresponding to the VF. The controller ID for the first VF is 0x2, and so on.
NSID indicates the namespace ID. Each VF must be assigned a dedicated namespace.
- Hi1823 and Mellanox NICs
echo ${VF_NUM} > /sys/class/net/eth0/device/sriov_numvfs
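The note above states that the first VF's controller ID is 0x2, "and so on". Assuming controller IDs simply increment per VF, the mapping can be sketched with a small hypothetical helper:

```shell
# ctrl_id_for_vf: map a 1-based VF index to its NVMe controller ID,
# assuming the first VF is 0x2 and subsequent VFs increment from there.
# (Hypothetical helper; verify the mapping on your device with
# `nvme list-secondary /dev/nvme0`.)
ctrl_id_for_vf() {
    printf '0x%x\n' $(( $1 + 1 ))
}

ctrl_id_for_vf 1   # first VF  -> 0x2
ctrl_id_for_vf 3   # third VF  -> 0x4
```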
- Destroy the VFs.
- NVMe driver
echo 0 > /sys/class/nvme/nvme0/device/sriov_numvfs
- Hi1823 and Mellanox NICs
echo 0 > /sys/class/net/eth0/device/sriov_numvfs
- Query the domain and BDF number of the passthrough device.
lspci -tv

In this example, the domain is 0000 and the BDF number is 17:00.0.
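The domain and BDF number can also be extracted programmatically from `lspci -D` output, whose addresses use the domain:bus:device.function form. A sketch parsing a sample line (the device description is illustrative):

```shell
# Sample `lspci -D` line: DOMAIN:BUS:DEV.FN followed by the device description.
line='0000:17:00.0 Non-Volatile memory controller: example device'

addr=${line%% *}      # first field: 0000:17:00.0
domain=${addr%%:*}    # text before the first colon: 0000
bdf=${addr#*:}        # text after the first colon: 17:00.0

echo "domain=$domain bdf=$bdf"
```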
- Add the passthrough device information to the VM XML file: add the hostdev element under the devices element, and enter the domain and BDF number in the source address.
<devices>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <driver name='vfio'/>
    <source>
      <address domain='0x0000' bus='0x17' slot='0x00' function='0x0'/>
    </source>
  </hostdev>
</devices>
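When several devices are passed through, templating the hostdev element from the queried address avoids hand-editing mistakes. A sketch that emits the element for given domain/BDF values (the address values are from the example above):

```shell
# PCI address values taken from the lspci query (example values).
domain='0x0000'; bus='0x17'; slot='0x00'; function='0x0'

# Emit a <hostdev> element for this address.
xml=$(cat <<EOF
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='${domain}' bus='${bus}' slot='${slot}' function='${function}'/>
  </source>
</hostdev>
EOF
)

echo "$xml"
```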
- On the VM, query the passthrough device information.
lspci -tv
