Installing, Configuring, and Verifying Nova
Nova performs lifecycle management of compute (VM) instances in the OpenStack project, including creating, scheduling, and terminating VMs.
Installing QEMU
Use QEMU 4.0.0. This Arm version supports cold migration only; it does not support live VM migration. This limitation has been resolved in Kunpeng openEuler, so if live VM migration is required, you are advised to use openEuler.
Perform the following operations on compute nodes.
- Install dependency packages.
yum -y install glib2-devel zlib-devel pixman-devel librbd1-devel libaio-devel
- Download the source code.
- Perform compilation and installation.
- Decompress the QEMU package, and go to the directory where QEMU is stored.
tar -xvf qemu-4.0.0.tar.xz
cd qemu-4.0.0
- Configure and install the QEMU package.
./configure --enable-rbd --enable-linux-aio

make -j 50
make install
- Add the lib.
sed -i '$ainclude /usr/local/lib' /etc/ld.so.conf
- Make the configuration take effect.
ldconfig
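The library-path step above can be tried safely first. The sketch below runs the same `sed '$a...'` append (which inserts text after the last line) against a scratch copy, so the real /etc/ld.so.conf is untouched; the file path and initial content are illustrative only.

```shell
# Demo of the sed '$a' append used above, on a scratch copy so the real
# /etc/ld.so.conf is untouched. Initial content is illustrative.
demo=/tmp/ld.so.conf.demo
echo "include /etc/ld.so.conf.d/*.conf" > "$demo"
sed -i '$ainclude /usr/local/lib' "$demo"   # '$a' appends after the last line
tail -n 1 "$demo"
```

Once the output shows the appended line, the same command applied to /etc/ld.so.conf followed by `ldconfig` makes the new library path take effect.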
Installing libvirt
Use libvirt 5.6. This Arm version supports cold migration only; it does not support live VM migration. This limitation has been resolved in Kunpeng openEuler, so if live VM migration is required, you are advised to use openEuler.
Perform the following operations on compute nodes.
- Install edk2.
- Online installation
Run the following commands to install edk2 online, as shown in Figure 1.
wget https://www.kraxel.org/repos/firmware.repo -O /etc/yum.repos.d/firmware.repo
yum -y install edk2.git-aarch64
- Offline installation
- Visit https://mirrors.huaweicloud.com/centos/8-stream/AppStream/aarch64/os/Packages/.
- Search for the latest edk2-aarch64 RPM package and copy it to the corresponding directory on the target server.
- Install edk2 offline. See Figure 2.
rpm -ivh edk2.git-aarch64*.rpm
By default, SSL verification is performed during edk2 installation, so you need to disable SSL verification first.
- Open the file.
vim /etc/yum.conf
- Press i to enter the insert mode and add the following content in the blank area:
sslverify=false
- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
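The vim steps above can also be done non-interactively, which is convenient in scripts. This is a sketch run against a scratch copy (the `[main]` content and /tmp path are illustrative); point it at /etc/yum.conf on a real node.

```shell
# Append sslverify=false to a yum.conf-style file, idempotently: add the
# line only if no sslverify setting exists yet. The /tmp path is a demo
# stand-in for /etc/yum.conf.
conf=/tmp/yum.conf.demo
printf '[main]\ngpgcheck=1\n' > "$conf"
grep -q '^sslverify=' "$conf" || echo 'sslverify=false' >> "$conf"
grep '^sslverify=' "$conf"
```

Because of the `grep ... ||` guard, running the snippet twice does not duplicate the line.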
- Install dependency packages.
yum -y install gnutls-devel libnl-devel libxml2-devel yajl-devel device-mapper-devel libpciaccess-devel
- Install libvirt-5.6.0 by compiling the source code.
- Download the source code.
wget https://libvirt.org/sources/libvirt-5.6.0-1.fc30.src.rpm -O /root/libvirt-5.6.0-1.fc30.src.rpm
- Perform the following steps to compile the source code:
cd /root/
rpm -i libvirt-5.6.0-1.fc30.src.rpm
yum -y install libxml2-devel readline-devel ncurses-devel libtasn1-devel gnutls-devel libattr-devel libblkid-devel augeas systemd-devel libpciaccess-devel yajl-devel sanlock-devel libpcap-devel libnl3-devel libselinux-devel dnsmasq radvd cyrus-sasl-devel libacl-devel parted-devel device-mapper-devel xfsprogs-devel librados2-devel librbd1-devel glusterfs-api-devel glusterfs-devel numactl-devel libcap-ng-devel fuse-devel netcf-devel libcurl-devel audit-libs-devel systemtap-sdt-devel nfs-utils dbus-devel scrub numad qemu-img rpm-build iscsi-initiator-utils
rpmbuild -ba ~/rpmbuild/SPECS/libvirt.spec

If an error occurs, use another compilation method:
rpmbuild --rebuild /root/libvirt-5.6.0-1.fc30.src.rpm
- Install the rebuilt libvirt packages.
yum install -y /root/rpmbuild/RPMS/aarch64/*.rpm
- Restart the libvirt service.
systemctl restart libvirtd
If the source code fails to compile, use the following method to compile and install it instead:
wget https://libvirt.org/sources/libvirt-5.6.0.tar.xz -O /root/libvirt-5.6.0.tar.xz
tar -xvf /root/libvirt-5.6.0.tar.xz
cd /root/libvirt-5.6.0/
./autogen.sh --system
make -j 50
make install
systemctl restart libvirtd
- Modify the /etc/libvirt/qemu.conf file.
- Add AAVMF.
nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd","/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2.git/aarch64/vars-template-pflash.raw"]

- Enable the permission.
Find the user = "root" and group = "root" lines and uncomment them.

- Check the libvirt and QEMU version information.
virsh version
As shown in the command output, the libvirt version is 5.6.0 and the QEMU version is 4.0.0. (The QEMU version shipped by default is 2.12.0.)
Creating the Nova Database
Perform the following operations on controller nodes.
- Connect to the database as the root user.
mysql -u root -p
- Create the nova, nova_api, and nova_cell0 databases.
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
- Grant permissions on the databases.
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '<PASSWORD>';
- Exit the database.
exit
For OpenStack Rocky, add the Placement database. For OpenStack Stein, the Placement database has been added to the independent Placement component. Ignore this step.
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '<PASSWORD>';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '<PASSWORD>';
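The six GRANT statements for the Nova databases follow a single pattern (three databases, two hosts each), so they can also be generated with a small shell loop. This sketch only prints the SQL for pasting into the mysql prompt; the /tmp output path is illustrative and `<PASSWORD>` stays a placeholder.

```shell
# Print the six GRANT statements (three databases x two hosts) into a file
# for review. <PASSWORD> remains a placeholder, as in the statements above.
out=/tmp/nova_grants.sql
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '<PASSWORD>';"
  done
done > "$out"
cat "$out"
```

Generating the statements this way avoids the copy-paste mistakes that are easy to make when editing six nearly identical lines by hand.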
Creating Roles and Users
Perform the following operations on controller nodes.
- Log in to the OpenStack CLI as the admin user.
source /etc/keystone/admin-openrc
- Create the nova user.
openstack user create --domain default --password-prompt nova
- Enter a password for the nova user when prompted.
- Add the admin role to the nova user.
openstack role add --project service --user nova admin
- Create the nova entity.
openstack service create --name nova --description "OpenStack Compute" compute
- Create compute API service endpoints.
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
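The three endpoint-create calls above differ only in the interface name, so they can be generated in a loop. The sketch below only prints the commands into a review file (the /tmp path is illustrative); on a configured controller you could run them directly instead of echoing.

```shell
# Generate the three compute endpoint-create commands (public, internal,
# admin). This prints them for review rather than executing them.
out=/tmp/nova_endpoints.sh
for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne compute $iface http://controller:8774/v2.1"
done > "$out"
cat "$out"
```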
For OpenStack Rocky, add the placement user. For OpenStack Stein, the placement user has been added to the independent Placement component. Ignore this step.
- Create the placement user and set a password.
openstack user create --domain default --password-prompt placement
- Add a role.
openstack role add --project service --user placement admin
- Create a Placement API user and service endpoint.
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Installing and Configuring Nova (Controller Node)
Perform the following operations on controller nodes.
- Install components.
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
For OpenStack Rocky, install the Placement component here. For OpenStack Stein, the Placement component has been installed independently. Ignore this step.
yum -y install openstack-nova-placement-api
- Edit the /etc/nova/nova.conf file to configure Nova.
- Enable compute and metadata APIs, configure RabbitMQ message queue access, and enable the network service.
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:<PASSWORD>@controller
my_ip = 172.168.201.11
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
allow_resize_to_same_host = true
my_ip specifies the management IP address of the controller node, and <PASSWORD> is the password set for the openstack user in the RabbitMQ service.
- Configure database access.
[api_database]
connection = mysql+pymysql://nova:<PASSWORD>@controller/nova_api
[database]
connection = mysql+pymysql://nova:<PASSWORD>@controller/nova
- Configure Identity service access.
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = <PASSWORD>
- In the /etc/nova/nova.conf file, enable the metadata agent and set the password in the [neutron] section.
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = <PASSWORD>
service_metadata_proxy = true
metadata_proxy_shared_secret = <PASSWORD>
- Configure the VNC proxy to use the management IP address of the controller node.
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_host = 0.0.0.0
novncproxy_port = 6080
novncproxy_base_url = http://172.168.201.11:6080/vnc_auto.html
Here, 172.168.201.11 is an example only. Use the actual management IP address of the controller node.
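Because novncproxy_base_url must carry the same address as the node's management IP, it can help to derive the URL from a single variable so the two values cannot drift apart. A minimal sketch, using the example address from above (MGMT_IP and the /tmp file are illustrative names):

```shell
# Build the noVNC base URL from one management-IP variable; substitute the
# result into nova.conf so the URL and my_ip stay consistent.
MGMT_IP=172.168.201.11
BASE_URL="http://${MGMT_IP}:6080/vnc_auto.html"
echo "novncproxy_base_url=${BASE_URL}" | tee /tmp/vnc_base_url.txt
```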
- Configure the location of the Image service API.
[glance]
api_servers = http://controller:9292
- Configure the lock path.
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
- Configure the access to the Placement service.
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = <PASSWORD>
For OpenStack Rocky, configure database access in [placement_database] and change <PASSWORD> to the password set for the database. Ignore this step for OpenStack Stein.
- Edit the /etc/nova/nova.conf file.
[placement_database]
connection = mysql+pymysql://placement:<PASSWORD>@controller/placement
- Modify the /etc/httpd/conf.d/00-nova-placement-api.conf file.
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

- Restart the HTTP service.
systemctl restart httpd
- Configure the metadata agent.
- Add the following to the /etc/neutron/metadata_agent.ini file:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = <PASSWORD>
- Populate the nova-api database.
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
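The four sync commands above must run in order, and a later step should not run if an earlier one failed. The sketch below collects them into one stop-on-failure sequence; it only prints the commands into a review file (the /tmp path is illustrative), so it is safe to run anywhere.

```shell
# Emit the four nova-manage sync steps as one ordered sequence. set -e makes
# the script stop at the first failure when the commands are executed for real.
set -e
out=/tmp/nova_db_sync.sh
for cmd in "api_db sync" "cell_v2 map_cell0" "cell_v2 create_cell --name=cell1 --verbose" "db sync"; do
  printf 'su -s /bin/sh -c "nova-manage %s" nova\n' "$cmd"
done > "$out"
cat "$out"
```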
- Check whether cell 0 and cell 1 are correctly registered.
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

- Enable the compute service and configure it to start as the system boots.
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
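The enable/start pair above can also be expressed per service with systemctl's `--now` flag, which enables and starts a unit in one call. This sketch only prints the commands into a review file (the /tmp path is illustrative) rather than executing them.

```shell
# Print one "enable --now" command per Nova controller service for review;
# run the printed commands (or drop the echo) on a real controller node.
out=/tmp/nova_ctl_services.sh
for svc in openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy; do
  echo "systemctl enable --now ${svc}.service"
done > "$out"
cat "$out"
```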
Installing and Configuring Nova (Compute Node)
Perform the following operations on compute nodes.
- Install components.
yum -y install openstack-nova-compute
- Edit the /etc/nova/nova.conf file.
- Enable the compute and metadata APIs and configure RabbitMQ message queue access.
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:<PASSWORD>@controller
my_ip = 172.168.201.12
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Set my_ip to the management IP address of the compute node.
- Configure Identity service access.
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = <PASSWORD>
- Add the following to the [neutron] section in the /etc/nova/nova.conf file:
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = <PASSWORD>
- Enable and configure the access to the remote console.
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
vncserver_proxyclient_address = $my_ip
- Configure the location of the Image service API.
[glance]
api_servers = http://controller:9292
- Configure the lock path.
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
- Configure the Placement API.
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = <PASSWORD>
- Add the following to the [libvirt] section:
virt_type = kvm
In the nova.conf configuration file, the number of PCIe ports for a created VM is 6 by default. To change it, modify the num_pcie_ports parameter in the nova.conf file on the compute node. A maximum value of 15 is supported.
vim /etc/nova/nova.conf
Uncomment and modify the following configuration:
num_pcie_ports=15
Restart the VM, log in to it, and run the lspci command to verify the change.

- Enable the Compute service and its dependencies, and configure them to start as the system boots.
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Adding Compute Nodes to the Cell Database
Perform the following operations on controller nodes.
- Log in to the OpenStack CLI as the admin user.
source /etc/keystone/admin-openrc
- Confirm that the compute host is in the database.
openstack compute service list --service nova-compute
- Discover hosts.
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When adding a compute node, run the following command on the controller node to register the new compute node:
nova-manage cell_v2 discover_hosts
Alternatively, set an interval so that the controller node periodically discovers compute nodes.
vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
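The interval can also be set without an interactive editor, which is handy when configuring many controllers. A sketch using sed to append the option after the `[scheduler]` section header; it runs against a scratch copy (the /tmp path and the `[DEFAULT]` stub are illustrative), so point CONF at /etc/nova/nova.conf for real use.

```shell
# Insert discover_hosts_in_cells_interval right after the [scheduler]
# header. The scratch file stands in for /etc/nova/nova.conf.
CONF=/tmp/nova.conf.demo
printf '[DEFAULT]\n[scheduler]\n' > "$CONF"
sed -i '/^\[scheduler\]/a discover_hosts_in_cells_interval = 300' "$CONF"
grep discover_hosts_in_cells_interval "$CONF"
```

Note that this simple sed sketch appends unconditionally; re-running it would add the line again, so guard it with a grep check in real automation.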
Verifying Nova
Perform the following operations on controller nodes.
- Log in to the OpenStack CLI as the admin user.
source /etc/keystone/admin-openrc
- List service components.
openstack compute service list
- List the API endpoints in the Identity service to verify connectivity with the Identity service.
openstack catalog list
- List images in the Glance service.
openstack image list
- Check that cells and the placement API are working properly and that other prerequisites are met.
nova-status upgrade check
Common Nova Commands
| Command | Description |
|---|---|
| openstack flavor create <flavor-name> --vcpus 4 --ram 8192 --disk 20 | Creates a flavor with the specified specifications. |
| openstack server create --flavor m1.nano --image cirros --nic net-id=provider --security-group default --key-name mykey provider-vm | Creates a VM instance. |
| openstack server start provider-vm | Starts an instance. |
| openstack server list | Lists all instances. |
| openstack server stop vm1 | Stops an instance. |
| openstack server delete vm1 | Deletes the selected instance. |

