Installing and Configuring the Storage Nodes
Perform the following operations on all storage nodes (x86-compute and arm-compute). Block storage can use LVM local volumes or Ceph remote storage. Storage nodes can be configured in either mode.
Installing and Configuring LVM Local Volumes (Option 1)
Perform the following operations on all storage nodes (x86-compute and arm-compute).
- Install the LVM package.
  yum -y install lvm2 device-mapper-persistent-data
  LVM is included in some distributions by default, in which case it does not need to be installed.
- Enable the LVM metadata service and configure it to start at system boot.
  systemctl enable lvm2-lvmetad.service
  systemctl start lvm2-lvmetad.service
- Create the LVM physical volume /dev/sdb.
  pvcreate /dev/sdb
- Create the LVM volume group cinder-volumes.
  You are advised to select a drive or drive partition that is not used by the OS, to avoid data loss.
  vgcreate cinder-volumes /dev/sdb
- Modify the /etc/lvm/lvm.conf configuration file.
- Open the file.
  vi /etc/lvm/lvm.conf
- Press i to enter the insert mode. In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices.
  devices {
      filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
  }
  Each item in the filter array starts with a for accept or r for reject. If the operating system drive of a storage or compute node uses LVM, that drive must also be added to the filter in the /etc/lvm/lvm.conf file on that node. For example, if the /dev/sda device contains the operating system, add sda to the filter, as shown above.
- Press Esc, type :wq!, and press Enter to save the file and exit.
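LVM evaluates the filter entries in order, and the first matching pattern decides; a device that matches no entry is accepted by default, which is why the trailing "r/.*/" entry is needed. The following Python sketch illustrates this first-match-wins logic (a simplified model for understanding, not LVM's actual implementation; it assumes the common "/" delimiter in filter entries):

```python
import re

def lvm_filter_decision(device, filter_items):
    """Return True (accept) or False (reject) for a device path,
    mimicking LVM's first-match-wins filter semantics."""
    for item in filter_items:
        # "a/sdb/" -> action "a", regex "sdb" (simplified: '/' delimiter only)
        action, pattern = item[0], item[2:-1]
        if re.search(pattern, device):
            return action == "a"
    return True  # LVM accepts a device that matches no filter entry

# The filter from the example above
filt = ["a/sda/", "a/sdb/", "r/.*/"]
print(lvm_filter_decision("/dev/sdb", filt))  # True: accepted
print(lvm_filter_decision("/dev/sdc", filt))  # False: rejected by r/.*/
```

Note that without the final "r/.*/" entry, /dev/sdc would fall through the list and be accepted by default.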
- Install the software package.
  yum -y install openstack-cinder targetcli python-keystone
- Modify the /etc/cinder/cinder.conf configuration file.
- Open the file.
  vi /etc/cinder/cinder.conf
- Press i to enter the insert mode and perform the following configurations.
- Configure database access.
  [database]
  connection = mysql+pymysql://cinder:PASSWORD@controller/cinder
- Configure RabbitMQ message queue access.
  [DEFAULT]
  transport_url = rabbit://openstack:PASSWORD@controller
- Configure Identity service access.
  [DEFAULT]
  auth_strategy = keystone

  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = PASSWORD
- Set my_ip to the IP address of the management network port on the storage node.
  [DEFAULT]
  my_ip = 192.168.100.121
- Configure AZs for Cinder storage on the storage node.
  [DEFAULT]
  storage_availability_zone = name of the AZ where the storage node is located
  default_availability_zone = name of the AZ where the storage node is located
  Set storage_availability_zone and default_availability_zone to the name of the availability zone where the storage node is located.
  Example: the AZ name of all x86 storage nodes (x86-compute) is az-x86, and the AZ name of all Arm storage nodes (arm-compute) is az-arm.
- In the [lvm] section, use the LVM driver, cinder-volumes volume group, iSCSI protocol, and the corresponding iSCSI service to configure the LVM backend.
  [lvm]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  target_protocol = iscsi
  target_helper = lioadm
If the [lvm] section does not exist, add it first.
- Enable the LVM backend.
  [DEFAULT]
  enabled_backends = lvm
- Configure the location of the Image service API.
  [DEFAULT]
  glance_api_servers = http://controller:9292
- Configure the lock path.
  [oslo_concurrency]
  lock_path = /var/lib/cinder/tmp
- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
Installing and Configuring Ceph Remote Storage Volumes (Option 2)
- Install the software package.
  yum -y install openstack-cinder targetcli python-keystone
- Modify the /etc/cinder/cinder.conf file.
- Open the file.
  vi /etc/cinder/cinder.conf
- Press i to enter the insert mode and perform the following configurations.
- Configure database access.
  [database]
  connection = mysql+pymysql://cinder:PASSWORD@controller/cinder
- Configure RabbitMQ message queue access.
  [DEFAULT]
  transport_url = rabbit://openstack:PASSWORD@controller
- Configure Identity service access.
  [DEFAULT]
  auth_strategy = keystone

  [keystone_authtoken]
  www_authenticate_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = cinder
  password = PASSWORD
- Set my_ip to the IP address of the management network port on the storage node.
  [DEFAULT]
  my_ip = 192.168.100.121
- Configure AZs for Cinder storage on the storage node.
  [DEFAULT]
  storage_availability_zone = name of the AZ where the storage node is located
  default_availability_zone = name of the AZ where the storage node is located
  Set storage_availability_zone and default_availability_zone to the name of the availability zone where the storage node is located.
  Example: the AZ name of all x86 storage nodes (x86-compute) is az-x86, and the AZ name of all Arm storage nodes (arm-compute) is az-arm.
- In the [ceph] section, configure the storage pool for connecting to Ceph and the name of the AZ where the storage pool is located.
  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph
  rbd_pool = Storage pool name
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_flatten_volume_from_snapshot = false
  rbd_max_clone_depth = 5
  rbd_store_chunk_size = 4
  rados_connect_timeout = -1
  glance_api_version = 2
  rbd_user = cinder
  rbd_secret_uuid = <UUID>
  storage_availability_zone = name of the AZ where the storage node is located
If the [ceph] section does not exist, add it first.
UUID is the UUID generated in section 5. In this example, the UUID is b3d5fee6-839c-482e-b244-668bad7128a9.
Storage pool name is the storage pool created in section Configuring the Ceph Environment. The x86 storage nodes (x86-compute) use the x86 storage pool, which is volumes-x86 in this example. The Arm storage nodes (arm-compute) use the Arm storage pool, which is volumes-arm in this example.
Example AZ names: the AZ name of all x86 storage nodes (x86-compute) is az-x86, and the AZ name of all Arm storage nodes (arm-compute) is az-arm.
- Enable the Ceph backend.
  [DEFAULT]
  enabled_backends = ceph
- Configure the location of the Image service API.
  [DEFAULT]
  glance_api_servers = http://controller:9292
- Configure the lock path.
  [oslo_concurrency]
  lock_path = /var/lib/cinder/tmp
- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
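As with the LVM backend, the edited options can be sanity-checked before restarting services; in particular, rbd_secret_uuid must be a well-formed UUID. A Python sketch in the same spirit (the function name and checks are illustrative, not part of the official procedure):

```python
import configparser
import uuid

def check_cinder_ceph(path="/etc/cinder/cinder.conf"):
    """Verify the key Ceph-backend settings described above are present."""
    cfg = configparser.ConfigParser(interpolation=None, strict=False)
    cfg.read(path)
    problems = []
    if cfg.get("DEFAULT", "enabled_backends", fallback="") != "ceph":
        problems.append("enabled_backends should be 'ceph'")
    if not cfg.has_section("ceph"):
        problems.append("missing [ceph] section")
        return problems
    if cfg.get("ceph", "volume_driver", fallback="") != "cinder.volume.drivers.rbd.RBDDriver":
        problems.append("volume_driver should be cinder.volume.drivers.rbd.RBDDriver")
    try:
        # e.g. b3d5fee6-839c-482e-b244-668bad7128a9 from the example above
        uuid.UUID(cfg.get("ceph", "rbd_secret_uuid", fallback=""))
    except ValueError:
        problems.append("rbd_secret_uuid is not a valid UUID")
    return problems  # empty list: the checked options look correct
```

The rbd_pool value is deployment-specific (volumes-x86 or volumes-arm in this example), so the sketch does not check it.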