
cVM Trustlist on OpenStack

This section explains which cVM features OpenStack supports and which it does not.

Table 1 Feature trustlist and blocklist

Trustlist

  • Creating, starting, forcibly stopping, and deleting cVMs
  • Configuring vCPU pinning and NUMA topologies for cVMs
  • Querying VM information
  • Reporting VM events
  • Managing virtual disk drives and CD-ROM drives
  • Virtual serial ports
  • Virtual NICs
  • Cold VM migration and evacuation

Blocklist

  • VM suspension and resumption
  • VM hibernation and wakeup
  • VM restart
  • Live VM migration
  • VM snapshotting
  • Setting the virtual time and RTC clock compensation speed
  • Secure VM boot
  • vCPU and memory hot-swap
  • Memory overcommitment, memory QoS, and CPU QoS

As OpenStack does not support vCPU hot-swap, memory hot-swap, or memory QoS, these capabilities do not need to be masked separately.

Creating, Starting, Forcibly Stopping, and Deleting cVMs

  1. Create a flavor for the cVM.
    openstack flavor create cca-flavor --vcpus 4 --ram 8192 --disk 50 \
    --property trait:HW_CPU_AARCH64_HISI_VIRTCCA=required \
    --property sw:qemu_cmdline="tmm-guest,id=tmm0,num-pmu-counters=1" \
    --property hw:mem_secure=true

    You can add the --property sw:swiotlb='${swiotlb_mb}' setting to specify the SWIOTLB size of a cVM. This setting only declares the non-secure memory resource that the cVM occupies in OpenStack resource accounting; it does not change the amount of non-secure memory the cVM actually uses. The value of ${swiotlb_mb} must be greater than 0 and a multiple of 64 MB. If the setting is omitted, the default is 128 MB.

    openstack flavor create cca-flavor --vcpus 4 --ram 8192 --disk 50 \
    --property trait:HW_CPU_AARCH64_HISI_VIRTCCA=required \
    --property sw:swiotlb='${swiotlb_mb}' \
    --property sw:qemu_cmdline="tmm-guest,id=tmm0,num-pmu-counters=1" \
    --property hw:mem_secure=true
  2. Create a cVM.
    openstack server create --image openEuler-image --flavor cca-flavor --network public-network cca-server
  3. Forcibly stop the cVM.
    openstack server stop cca-server
  4. Start the cVM.
    openstack server start cca-server
  5. Delete the cVM.
    openstack server delete cca-server
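
The SWIOTLB constraint from step 1 (a value greater than 0 and a multiple of 64 MB) can be checked before creating the flavor. A minimal sketch; the check_swiotlb helper is illustrative, not an OpenStack command:

```shell
#!/bin/sh
# Validate a candidate sw:swiotlb value before passing it to
# "openstack flavor create": it must be > 0 and a multiple of 64 MB.
check_swiotlb() {
    mb="$1"
    if [ "$mb" -gt 0 ] && [ $(( mb % 64 )) -eq 0 ]; then
        echo "swiotlb=${mb} MB: valid"
    else
        echo "swiotlb=${mb} MB: invalid (must be > 0 and 64 MB-aligned)"
    fi
}

check_swiotlb 128   # the default size
check_swiotlb 100   # not 64 MB-aligned
```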

Configuring vCPU Pinning and NUMA Topologies for cVMs

  1. Modify the /etc/nova/nova.conf file of a compute node to set the CPU pinning range of the compute node.
    [compute]
    cpu_dedicated_set = "0-63"
    cpu_shared_set = "64-127"
    • If cpu_dedicated_set is not configured, vCPU pinning is not available for cVMs on OpenStack.
    • If only cpu_dedicated_set is configured, OpenStack uses only the CPUs within the specified range for CPU pinning, and VMs without CPU pinning cannot be deployed.
    • If both cpu_dedicated_set and cpu_shared_set are configured, OpenStack allows deploying VMs with CPU pinning and VMs without CPU pinning at the same time. For VMs with CPU pinning, CPUs are assigned from cpu_dedicated_set. For VMs without CPU pinning, CPUs are assigned from cpu_shared_set.
  2. Modify the /etc/nova/nova.conf file of the controller node, configure filter_scheduler, and add NUMATopologyFilter to the enabled_filters option.
    [filter_scheduler]
    enabled_filters = ...,NUMATopologyFilter
    available_filters = nova.scheduler.filters.all_filters
  3. Create a flavor with CPU pinning and NUMA topology attributes. In this example, a cVM with four cores and four NUMA nodes is created. Each NUMA node is allocated 2048 MB memory and bound to a CPU.
    openstack flavor create cca-numa-flavor --vcpus 4 --ram 8192 --disk 50 \
    --property hw:numa_nodes='4' \
    --property hw:numa_mem.0=2048 \
    --property hw:numa_mem.1=2048 \
    --property hw:numa_mem.2=2048 \
    --property hw:numa_mem.3=2048 \
    --property hw:numa_cpus.0="0" \
    --property hw:numa_cpus.1="1" \
    --property hw:numa_cpus.2="2" \
    --property hw:numa_cpus.3="3" \
    --property hw:cpu_policy='dedicated' \
    --property trait:HW_CPU_AARCH64_HISI_VIRTCCA=required \
    --property sw:qemu_cmdline="tmm-guest,id=tmm0,num-pmu-counters=1" \
    --property hw:mem_secure=true

    CPU pinning and the NUMA topology must be configured together. Otherwise, an error is reported when you create the VM.

  4. Use the flavor created in 3 to create a VM.
    openstack server create --image openEuler-image --flavor cca-numa-flavor --network public-network cca-num-server
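
The per-node values in step 3 must be consistent with the flavor totals: the hw:numa_mem.N values must sum to --ram, and each of the vCPUs must be assigned to exactly one NUMA node. A rough self-check of the numbers used above, as plain arithmetic rather than an OpenStack call:

```shell
#!/bin/sh
# Cross-check the NUMA flavor numbers: 4 vCPUs, 8192 MB RAM,
# 4 NUMA nodes with 2048 MB and one pinned CPU each.
vcpus=4
ram=8192
node_mem="2048 2048 2048 2048"

total=0
nodes=0
for m in $node_mem; do
    total=$(( total + m ))
    nodes=$(( nodes + 1 ))
done

if [ "$total" -eq "$ram" ] && [ "$nodes" -eq "$vcpus" ]; then
    echo "flavor is consistent: ${nodes} nodes, ${total} MB total"
else
    echo "flavor is inconsistent"
fi
```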

Querying VM Information

  1. View information about all deployed VMs.
    openstack server list
  2. View information about a specific VM.
    openstack server show cca-server
  3. View secure memory information.
    1. Obtain the compute node ID.
      openstack resource provider list

    2. Query the resource inventory based on the node ID, including secure memory.
      openstack resource provider inventory list ${resource_provider_id}

    3. View the resource usage. SECURE_NUMA_x does not provide real-time resource usage information.
      openstack resource provider usage show ${resource_provider_id}

      Secure NUMA memory resource information is refreshed through the inventory table. Because of resource provider generation differences in the current update mechanism, an individual inventory update may report an error; this does not affect resource updating.
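
The usage table can be post-processed to pull out just the secure NUMA rows. A sketch over illustrative sample output (the figures are invented; real output comes from the usage command above):

```shell
#!/bin/sh
# Extract SECURE_NUMA rows from sample "openstack resource provider
# usage show" table output. The figures below are illustrative only.
usage_table="| resource_class | usage |
| MEMORY_MB      | 8192  |
| SECURE_NUMA_0  | 2048  |
| SECURE_NUMA_1  | 0     |"

secure=$(echo "$usage_table" | awk -F'|' '/SECURE_NUMA/ {
    gsub(/ /, "", $2); gsub(/ /, "", $3); print $2 "=" $3
}')
echo "$secure"
```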

Managing Virtual Disk Drives

  1. Create an empty volume. The size is specified in GB. By default, virtio-blk is used.
    openstack volume create --size 5 ${volume_name}
  2. View the ID of the empty volume.
    openstack volume list

  3. When creating a VM, specify the empty volume by its volume ID.
    openstack server create --image openEuler-image \
    --flavor cca-flavor \
    --network public-network \
    --block-device source_type=volume,disk_bus=virtio,uuid=${volume_id} \
    volume-server
  4. After deploying the VM instance, run the virsh command to access the VM. The name of the VM created on OpenStack is instance-xxx.
    virsh list
    virsh console ${domain_id}
  5. After entering the user name and password, run the following command to view the created virtual disk drive:
    lsblk

    To use a SCSI disk instead of virtio-blk, set the following image properties before creating the VM:

    openstack image set ${image-id} --property hw_scsi_model=virtio-scsi --property hw_disk_bus='scsi'

    After logging in to the VM, run the following command to check whether SCSI has been enabled:

    lsblk -o NAME,TRAN,SUBSYSTEMS
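
When reading the lsblk output, a virtio-scsi disk typically appears as sdX with scsi in its SUBSYSTEMS chain, while a virtio-blk disk appears as vdX with a plain block:virtio:pci chain. A hedged sketch that classifies sample output of that command (the sample lines are illustrative, not captured from a real cVM):

```shell
#!/bin/sh
# Classify disks from sample "lsblk -o NAME,TRAN,SUBSYSTEMS" output.
# A SUBSYSTEMS chain containing "scsi" indicates the virtio-scsi path;
# virtio-blk disks show up as vdX with block:virtio:pci only.
sample="sda scsi block:scsi:virtio:pci
vda      block:virtio:pci"

result=$(echo "$sample" | while read -r name rest; do
    case "$rest" in
        *scsi*) echo "$name: scsi-attached" ;;
        *)      echo "$name: virtio-blk" ;;
    esac
done)
echo "$result"
```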

Managing the Virtual CD-ROM Drive

  1. Prepare an ISO file and upload it to OpenStack.
    openstack image create ${iso-image-name} --file ./your-image.iso --disk-format iso
  2. Create a storage volume based on the ISO file. The created volume must be larger than the ISO file.
    openstack volume create --image ${iso-image-uuid} --size 1 ${iso-volume-name}
  3. Mount the CD-ROM drive volume to the specified cVM instance.
    openstack server add volume ${cvm_name} ${iso-volume-name}
  4. After logging in to the VM using the virsh command, you can run the mount command to view the ISO file content.
    lsblk 
    mount /dev/vdb /mnt 
    ls /mnt

    If mounting fails with the error "libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': aio=native was specified, but is not supported in this build", QEMU was compiled without AIO support.

    Install the libaio-devel library in the QEMU build environment and recompile QEMU; when the library is present, QEMU enables AIO support by default.

    yum install libaio-devel -y
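
Step 2 requires the volume to be at least as large as the ISO file. The minimum --size value can be computed by rounding the ISO size up to the next whole gigabyte; a minimal sketch using a hypothetical byte count instead of a real ISO:

```shell
#!/bin/sh
# Round an ISO size in bytes up to whole GB, the unit expected by
# "openstack volume create --size". In practice the byte count would
# come from: stat -c %s your-image.iso
iso_bytes=734003200        # hypothetical ISO of roughly 700 MB
gb=$(( (iso_bytes + 1073741824 - 1) / 1073741824 ))
echo "minimum volume size: --size ${gb}"
```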

Configuring a NIC for a cVM

  1. Create a cVM by following instructions in Environment Deployment.
  2. Configure the IP address.

    Log in to the VM by running virsh console and check whether DHCP has automatically configured an IP address. If it has not, configure one manually as follows:

    1. Open the /etc/sysconfig/network-scripts/ifcfg-eth0 file.
      vim /etc/sysconfig/network-scripts/ifcfg-eth0
    2. Press i to enter the insert mode and modify the configuration file as follows:
      TYPE=Ethernet
      BOOTPROTO=static
      DEFROUTE=yes
      DEVICE=eth0
      ONBOOT=yes
      IPADDR=xx.xx.xx.xx
      PREFIX=xx
      STP=yes
    3. Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
  3. Configure an IP address for the external network bridge br-ex of OpenStack.
    ip addr add xx.xx.xx.xx/xx dev br-ex
    ip link set br-ex up
  4. Add security group rules.
    1. Display the security group list.
      openstack security group list
    2. View the default security group rule.
      openstack security group rule list default
    3. Configure the security group rule.
      openstack security group rule create --proto icmp --remote-ip xx.xx.xx.xx/xx default
      openstack security group rule create --proto tcp --dst-port 22 --remote-ip xx.xx.xx.xx/xx default

      To allow any IP address to access the VM through br-ex, set --remote-ip to 0.0.0.0/0.
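
--remote-ip limits which source addresses match a rule. Whether a client address falls inside a given CIDR can be checked with plain integer arithmetic; a sketch under the assumption of IPv4 (the ip_to_int and in_cidr helpers are illustrative):

```shell
#!/bin/sh
# Check whether an IPv4 address falls inside a CIDR such as the one
# passed to --remote-ip. Pure arithmetic, no network access.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {                     # usage: in_cidr ADDRESS CIDR
    ip=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    if [ $(( ip & mask )) -eq $(( net & mask )) ]; then
        echo yes
    else
        echo no
    fi
}

in_cidr 10.0.0.5 10.0.0.0/24      # address inside the rule's range
in_cidr 203.0.113.9 10.0.0.0/24   # address outside the range
```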

Cold Migrating a cVM

  1. Configure the certificate.

    OpenStack uses the scp command to copy data between nodes during cold migration, so an SSH certificate must be configured for communication between them. The node on which the VM instance is originally deployed is referred to as the source node, and the node it is deployed to after migration is referred to as the target node.

    1. Perform the following operations on the source node.
      1. Run the following command as user nova to generate a key pair. The default settings are recommended.
        sudo -u nova ssh-keygen
      2. Obtain the public key content.
        cat /var/lib/nova/.ssh/xxx.pub
    2. Perform the following operations on the target node.
      Write the public key of the source node obtained in 1.a.ii to the authorized_keys file, and ensure that the nova user has read permission on it.
      cd /var/lib/nova/.ssh
      vim authorized_keys
      chown nova:nova /var/lib/nova/.ssh/*
    3. Perform the following operations on the source node.
      Run the ssh command to log in to the target node as the nova user. If the login is successful without a password, the certificate configuration has taken effect.
      sudo -u nova ssh -o BatchMode=yes nova@{target_node_ip}

      If "Host key verification failed" is displayed, run the following command to add the host key of the target node to the known_hosts file of the nova user on the source node:

      sudo -u nova ssh-keyscan -H {target_ip} >> /var/lib/nova/.ssh/known_hosts
  2. Perform the migration.
    1. Run the migration command on the controller node. The --host option specifies the target node name.
      openstack server migrate --host ${target_node_name} ${cvm_name} --os-compute-api-version 2.56
    2. Wait a few minutes and manually confirm that the migration has completed.
      openstack server resize confirm ${cvm_name}

      • If the confirmation command reports an error, the VM migration status has not been updated yet. Wait and run the confirmation command again later.

      • If a data volume is attached to a VM, the cold migration will fail.
      • If a cold migration fails and the VM is deleted using OpenStack commands, a residual domain definition file may still be listed by the virsh command. Delete it manually:
        virsh undefine instance-xxx

cVM Evacuation

  1. The evacuation command can be executed only when the compute node hosting the VM has failed. To facilitate verification, run the following command to set the node's compute service to down:
    openstack compute service set --down ${node_name} nova-compute --os-compute-api-version 2.11
  2. Run the evacuation command.
    openstack server evacuate ${cvm_name}
  3. After the evacuation is complete, check the VM details. You can see that the VM instance is running properly on another node.
    openstack server show ${cvm_name}