
Installing, Configuring, and Verifying Nova

Nova performs lifecycle management of compute (VM) instances in the OpenStack project, including creating, scheduling, and terminating VMs.

Installing QEMU

Use QEMU 4.0.0. On Arm, this version supports cold VM migration but not live VM migration. This limitation has been resolved in Kunpeng openEuler, so if live VM migration is required, you are advised to use openEuler.

Perform the following operations on compute nodes.

  1. Install dependency packages.
    yum -y install glib2-devel zlib-devel pixman-devel librbd1-devel libaio-devel
    
  2. Download the source code.
    • Online:
      wget https://download.qemu.org/qemu-4.0.0.tar.xz
      
    • Offline:

      On a computer that can access the Internet, visit https://download.qemu.org/qemu-4.0.0.tar.xz to download the source code and copy it to the target server.

  3. Perform compilation and installation.
    1. Decompress the QEMU package, and go to the directory where QEMU is stored.
      tar -xvf qemu-4.0.0.tar.xz
      cd qemu-4.0.0
      
    2. Configure and install the QEMU package.
      ./configure --enable-rbd --enable-linux-aio
      

      make -j 50
      make install
      
  4. Add the /usr/local/lib library path to the dynamic linker configuration.
    sed -i '$ainclude /usr/local/lib' /etc/ld.so.conf
    
  5. Make the configuration take effect.
    ldconfig
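
The `$a` expression in the sed command of step 4 appends a line after the last line of the file. A minimal demonstration, run on a temporary copy rather than the real /etc/ld.so.conf:

```shell
# Demonstrate the `$a` (append-after-last-line) sed expression used in
# step 4, on a temporary file instead of the real /etc/ld.so.conf.
tmp=$(mktemp)
printf 'include ld.so.conf.d/*.conf\n' > "$tmp"
sed -i '$ainclude /usr/local/lib' "$tmp"
last_line=$(tail -n 1 "$tmp")
echo "$last_line"    # → include /usr/local/lib
rm -f "$tmp"
```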
    

Installing libvirt

Use libvirt 5.6. On Arm, this version supports cold VM migration but not live VM migration. This limitation has been resolved in Kunpeng openEuler, so if live VM migration is required, you are advised to use openEuler.

Perform the following operations on compute nodes.

  1. Install edk2.
    • Online installation

      Run the following commands to install edk2 online, as shown in Figure 1.

      wget https://www.kraxel.org/repos/firmware.repo -O /etc/yum.repos.d/firmware.repo
      yum -y install edk2.git-aarch64
      
      Figure 1 Installing edk2 online
    • Offline installation
      1. Visit https://mirrors.huaweicloud.com/centos/8-stream/AppStream/aarch64/os/Packages/.
      2. Search for the latest edk2-aarch64 RPM package and copy it to the corresponding directory on the target server.
      3. Install edk2 offline. See Figure 2.
        rpm -ivh edk2.git-aarch64*.rpm
        
        Figure 2 Installing edk2 offline

    SSL verification is performed during edk2 installation by default, so disable SSL verification first.

    1. Open the file.
      vim /etc/yum.conf
    2. Press i to enter the insert mode and add the following content in the blank area:
      sslverify=false
    3. Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
  2. Install dependency packages.
    yum -y install gnutls-devel libnl-devel libxml2-devel yajl-devel device-mapper-devel libpciaccess-devel
    
  3. Install libvirt-5.6.0 by compiling the source code.
    1. Download the source code.
      wget https://libvirt.org/sources/libvirt-5.6.0-1.fc30.src.rpm -O /root/libvirt-5.6.0-1.fc30.src.rpm
      
    2. Perform the following steps to compile the source code:
      cd /root/
      rpm -i libvirt-5.6.0-1.fc30.src.rpm
      yum -y install libxml2-devel readline-devel ncurses-devel libtasn1-devel gnutls-devel libattr-devel libblkid-devel augeas systemd-devel libpciaccess-devel yajl-devel sanlock-devel libpcap-devel libnl3-devel libselinux-devel dnsmasq radvd cyrus-sasl-devel libacl-devel parted-devel device-mapper-devel xfsprogs-devel librados2-devel librbd1-devel glusterfs-api-devel glusterfs-devel numactl-devel libcap-ng-devel fuse-devel netcf-devel libcurl-devel audit-libs-devel systemtap-sdt-devel nfs-utils dbus-devel scrub numad qemu-img rpm-build iscsi-initiator-utils
      rpmbuild -ba ~/rpmbuild/SPECS/libvirt.spec
      

      If an error occurs, use another compilation method:

      rpmbuild --rebuild /root/libvirt-5.6.0-1.fc30.src.rpm
      
    3. Install the rebuilt libvirt packages.
      yum install -y /root/rpmbuild/RPMS/aarch64/*.rpm
      
    4. Restart the libvirt service.
      systemctl restart libvirtd
      

      If the source code fails to be compiled, use another method to compile and install it:

      wget https://libvirt.org/sources/libvirt-5.6.0.tar.xz -O /root/libvirt-5.6.0.tar.xz
      
      tar -xvf /root/libvirt-5.6.0.tar.xz
      cd /root/libvirt-5.6.0/
      ./autogen.sh --system
      make -j 50
      make install
      
      systemctl restart libvirtd
      
  4. Modify the /etc/libvirt/qemu.conf file.
    1. Add AAVMF.
      nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd","/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw"]
      

    2. Enable the permission.

      Find the user = "root" and group = "root" lines and uncomment them.

    3. Check the libvirt and QEMU version information.
      virsh version
      

      The command output should show libvirt 5.6.0 and QEMU 4.0.0 (without this compilation, the distribution's default QEMU version is 2.12.0).
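
The versions can also be checked programmatically; a hedged sketch that parses them out of `virsh version` output (the here-string below is sample output standing in for the real command output on your node):

```shell
# Parse the libvirt and hypervisor versions from `virsh version` output.
# `sample` stands in for the real output; pipe `virsh version` instead
# to check a live node.
sample='Compiled against library: libvirt 5.6.0
Using library: libvirt 5.6.0
Using API: QEMU 5.6.0
Running hypervisor: QEMU 4.0.0'
libvirt_ver=$(printf '%s\n' "$sample" | awk '/^Using library/ {print $4}')
qemu_ver=$(printf '%s\n' "$sample" | awk '/^Running hypervisor/ {print $4}')
echo "libvirt=$libvirt_ver qemu=$qemu_ver"    # → libvirt=5.6.0 qemu=4.0.0
```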

Creating the Nova Database

Perform the following operations on controller nodes.

  1. Connect to the database as the root user.
    mysql -u root -p
    
  2. Create the nova, nova_api, and nova_cell0 databases.
    CREATE DATABASE nova_api;
    CREATE DATABASE nova;
    CREATE DATABASE nova_cell0;
    
  3. Grant privileges on the databases.
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY '<PASSWORD>';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY '<PASSWORD>';
    
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY '<PASSWORD>';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY '<PASSWORD>';
    
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY '<PASSWORD>';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY '<PASSWORD>';
    
  4. Exit the database.
    exit
    

For OpenStack Rocky, also create the Placement database as follows. For OpenStack Stein, Placement is an independent component with its own database, so skip this step.

CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '<PASSWORD>'; 
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '<PASSWORD>';
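
The six nova GRANT statements above follow a single pattern; for reference, they can be generated with a loop. NOVA_DBPASS below is a placeholder for the password you chose (this sketch only prints the SQL, it does not run it):

```shell
# Generate the six nova GRANT statements from one template.
# NOVA_DBPASS is a placeholder; substitute your real database password.
gen_grants() {
  for db in nova_api nova nova_cell0; do
    for host in localhost '%'; do
      echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '${NOVA_DBPASS}';"
    done
  done
}
NOVA_DBPASS='<PASSWORD>'
gen_grants
```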

Creating Roles and Users

Perform the following operations on controller nodes.

  1. Log in to the OpenStack CLI as the admin user.
    source /etc/keystone/admin-openrc
    
  2. Create the nova user.
    openstack user create --domain default --password-prompt nova
    
  3. Enter a password for the nova user when prompted.
  4. Add the admin role to the nova user.
    openstack role add --project service --user nova admin
    
  5. Create the nova entity.
    openstack service create --name nova --description "OpenStack Compute" compute
    
  6. Create compute API service endpoints.
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
    openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
    openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
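
The three endpoint commands above differ only in the interface argument; a loop sketch that prints the same commands (it does not call the OpenStack CLI):

```shell
# Print the three compute endpoint-create commands; only the interface
# (public/internal/admin) varies between them.
print_endpoint_cmds() {
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne compute ${iface} http://controller:8774/v2.1"
  done
}
print_endpoint_cmds
```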
    

For OpenStack Rocky, add the placement user as follows. For OpenStack Stein, the placement user belongs to the independent Placement component, so skip this step.

  1. Create the placement user and set a password.
    openstack user create --domain default --password-prompt placement
  2. Add a role.
    openstack role add --project service --user placement admin 
  3. Create a Placement API user and service endpoint.
    openstack service create --name placement --description "Placement API" placement
    openstack endpoint create --region RegionOne placement public http://controller:8778 
    openstack endpoint create --region RegionOne placement internal http://controller:8778 
    openstack endpoint create --region RegionOne placement admin http://controller:8778

Installing and Configuring Nova (Controller Node)

Perform the following operations on controller nodes.

  1. Install components.
    yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
    

    For OpenStack Rocky, install the Placement component here. For OpenStack Stein, the Placement component is installed independently, so skip this step.

    yum -y install openstack-nova-placement-api
  2. Edit the /etc/nova/nova.conf file to configure Nova.
    1. Enable compute and metadata APIs, configure RabbitMQ message queue access, and enable the network service.
      [DEFAULT]
      enabled_apis = osapi_compute,metadata
      transport_url = rabbit://openstack:<PASSWORD>@controller
      my_ip = 172.168.201.11
      use_neutron = true
      firewall_driver = nova.virt.firewall.NoopFirewallDriver
      allow_resize_to_same_host = true
      

      my_ip specifies the management IP address of the controller node, and <PASSWORD> is the password set for the openstack user in RabbitMQ.

    2. Configure database access.
      [api_database]
      connection = mysql+pymysql://nova:<PASSWORD>@controller/nova_api
      [database]
      connection = mysql+pymysql://nova:<PASSWORD>@controller/nova
      
    3. Configure Identity service access.
      [api]
      auth_strategy = keystone
      [keystone_authtoken]
      auth_url = http://controller:5000/v3
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      project_name = service
      username = nova
      password = <PASSWORD>
      
    4. In the /etc/nova/nova.conf file, enable the metadata agent and set the password in the [neutron] section.
      [neutron]
      url = http://controller:9696
      auth_url = http://controller:5000
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = <PASSWORD>
      service_metadata_proxy = true
      metadata_proxy_shared_secret = <PASSWORD>
      
    5. Configure the VNC proxy to use the management IP address of the controller node.
      [vnc]
      enabled = true
      server_listen = $my_ip
      server_proxyclient_address = $my_ip
      novncproxy_host = 0.0.0.0
      novncproxy_port = 6080
      novncproxy_base_url = http://172.168.201.11:6080/vnc_auto.html
      

      Here, 172.168.201.11 is an example only. Use the actual management IP address of the controller node.

    6. Configure the location of the Image service API.
      [glance]
      api_servers = http://controller:9292
      
    7. Configure the lock path.
      [oslo_concurrency]
      lock_path = /var/lib/nova/tmp
      
    8. Configure the access to the Placement service.
      [placement]
      region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:5000/v3
      username = placement
      password = <PASSWORD>
      

      For OpenStack Rocky, configure database access in [placement_database] and change <PASSWORD> to the password set for the database. Ignore this step for OpenStack Stein.

      1. Edit the /etc/nova/nova.conf file.
        [placement_database]
        connection = mysql+pymysql://placement:<PASSWORD>@controller/placement
      2. Modify the /etc/httpd/conf.d/00-nova-placement-api.conf file.
        <Directory /usr/bin> 
        <IfVersion >= 2.4> 
        Require all granted 
        </IfVersion> 
        <IfVersion < 2.4> 
        Order allow,deny 
        Allow from all 
        </IfVersion> 
        </Directory>

      3. Restart the HTTP service.
        systemctl restart httpd
    9. Configure the metadata agent.
      1. Add the following to the /etc/neutron/metadata_agent.ini file:
        [DEFAULT]
        nova_metadata_host = controller
        metadata_proxy_shared_secret = <PASSWORD>
        
      2. Populate the nova-api and nova databases, and register the cell0 and cell1 cells.
        su -s /bin/sh -c "nova-manage api_db sync" nova
        su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
        su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
        su -s /bin/sh -c "nova-manage db sync" nova
        
    10. Check whether cell 0 and cell 1 are correctly registered.
      su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
      

    11. Enable the Compute services and configure them to start at system boot.
      systemctl enable openstack-nova-api.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
      systemctl start openstack-nova-api.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
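
Before starting the services, it can help to sanity-check that every section edited above exists in nova.conf. A hedged sketch; the here-document stands in for your real /etc/nova/nova.conf (point `conf` at the real file for actual use):

```shell
# Check that each [section] configured in the steps above is present.
# The here-doc is a stand-in; use conf=/etc/nova/nova.conf on a real node.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
[api_database]
[database]
[api]
[keystone_authtoken]
[neutron]
[vnc]
[glance]
[oslo_concurrency]
[placement]
EOF
missing=0
for section in DEFAULT api_database database api keystone_authtoken neutron vnc glance oslo_concurrency placement; do
  grep -q "^\[${section}\]" "$conf" || { echo "missing: [${section}]"; missing=$((missing+1)); }
done
echo "missing sections: $missing"
rm -f "$conf"
```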
      

Installing and Configuring Nova (Compute Node)

Perform the following operations on compute nodes.

  1. Install components.
    yum -y install openstack-nova-compute
    
  2. Edit the /etc/nova/nova.conf file.
    1. Enable the compute and metadata APIs.
      [DEFAULT]
      enabled_apis = osapi_compute,metadata
      transport_url = rabbit://openstack:<PASSWORD>@controller
      my_ip = 172.168.201.12
      use_neutron = true
      firewall_driver = nova.virt.firewall.NoopFirewallDriver
      

      Set my_ip to the management IP address of the compute node.

    2. Configure Identity service access.
      [api]
      auth_strategy = keystone
      [keystone_authtoken]
      auth_url = http://controller:5000/v3
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      project_name = service
      username = nova
      password = <PASSWORD>
      
    3. Add the following to the [neutron] section in the /etc/nova/nova.conf file:
      [neutron]
      url = http://controller:9696
      auth_url = http://controller:5000
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      region_name = RegionOne
      project_name = service
      username = neutron
      password = <PASSWORD>
      
    4. Enable and configure remote console access.
      [vnc]
      enabled = true
      server_listen = 0.0.0.0
      server_proxyclient_address = $my_ip
      novncproxy_base_url = http://controller:6080/vnc_auto.html
      vncserver_proxyclient_address = $my_ip
      
    5. Configure the location of the Image service API.
      [glance]
      api_servers = http://controller:9292
      
    6. Configure the lock path.
      [oslo_concurrency]
      lock_path = /var/lib/nova/tmp
      
    7. Configure the Placement API.
      [placement]
      region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:5000/v3
      username = placement
      password = <PASSWORD>
      
    8. Add the following to the [libvirt] section:
      virt_type = kvm
      

      In the nova.conf configuration file, the number of PCIe ports on a created VM is 6 by default. To change it, modify the num_pcie_ports parameter in nova.conf on the compute node. The maximum supported value is 15.

      vim /etc/nova/nova.conf
      

      Uncomment and modify the following setting:

      num_pcie_ports=15
      

      Restart the VM, log in, and run the lspci command to verify the change.
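
Inside the guest, the configured port count can be checked by filtering the lspci output; a sketch over sample output (the sample lines are illustrative, not real output from this setup):

```shell
# Count PCIe root ports in (sample) lspci output. Replace the here-string
# with real `lspci` output from inside the guest to check that
# num_pcie_ports took effect.
sample='00:01.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port'
port_count=$(printf '%s\n' "$sample" | grep -c 'PCIe Root port')
echo "$port_count"    # → 3
```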

  3. Enable the Compute service and its dependencies, and configure them to start at system boot.
    systemctl enable libvirtd.service openstack-nova-compute.service
    systemctl start libvirtd.service openstack-nova-compute.service
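
The virt_type = kvm setting above assumes hardware virtualization is exposed to the compute node. A quick hedged check using the standard /dev/kvm device node:

```shell
# KVM acceleration requires the /dev/kvm device node on the compute node.
# If it is absent, virt_type = qemu (pure emulation) is the fallback.
if [ -e /dev/kvm ]; then
  kvm_status="kvm available"
else
  kvm_status="kvm missing: set virt_type = qemu"
fi
echo "$kvm_status"
```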
    

Adding Compute Nodes to the Cell Database

Perform the following operations on controller nodes.

  1. Log in to the OpenStack CLI as the admin user.
    source /etc/keystone/admin-openrc
    
  2. Confirm that the compute node is registered in the database.
    openstack compute service list --service nova-compute
    
  3. Discover hosts.
    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    

    Whenever you add a new compute node, run the following command on the controller node to register it:

    nova-manage cell_v2 discover_hosts
    

    Alternatively, set an interval (in seconds) so that the controller node periodically discovers new compute nodes.

    vim /etc/nova/nova.conf
    
    [scheduler]
    discover_hosts_in_cells_interval = 300
    

Verifying Nova

Perform the following operations on controller nodes.

  1. Log in to the OpenStack CLI as the admin user.
    source /etc/keystone/admin-openrc
    
  2. List the service components.
    openstack compute service list
    
  3. List the API endpoints in the Identity service to verify connectivity with the Identity service.
    openstack catalog list
    
  4. List images in the Glance service.
    openstack image list
    
  5. Check that cells and the placement API are working properly and that other prerequisites are met.
    nova-status upgrade check
    

Common Nova Commands

openstack flavor create <flavor-name> --vcpus 4 --ram 8192 --disk 20
    Creates a flavor with the specified specifications.

openstack server create --flavor m1.nano --image cirros \
--nic net-id=provider --security-group default \
--key-name mykey provider-vm
    Creates a VM instance.

openstack server start provider-vm
    Starts an instance.

openstack server list
    Lists all instances.

openstack server stop vm1
    Stops an instance.

openstack server delete vm1
    Deletes the specified instance.