Environment Requirements
Hardware Requirements
In this document, the compute node and local storage node are deployed on the same server.
With this minimum configuration, the hybrid deployment requires nine servers: three servers in the OpenStack cluster, three servers in the Ceph cluster, one BMS management node, and two server nodes (one x86 server and one Arm server) for verifying bare metal instance provisioning.
- For the VM hybrid deployment, at least six servers are required if Ceph is used as the storage backend; otherwise, at least three servers are required.
- For the BMS hybrid deployment, three servers are required: one BMS management node and two server nodes (one x86 server and one Arm server) for verifying bare metal instance provisioning.
Table 1 lists the node roles for each server.
| Device Type | Hostname | Model/Configuration | Remarks |
|---|---|---|---|
| Controller node | controller | - | Functions as the OpenStack controller (management) node in the hybrid deployment. |
| x86 compute node/x86 network node | x86-compute | - | Functions as the network node for the x86 AZ, an x86 compute node, and a local storage node in the hybrid deployment. |
| Arm compute node/Arm network node | arm-compute | - | Functions as the network node for the Arm AZ, an Arm compute node, and a local storage node in the hybrid deployment. |
| BMS management node | baremetal | - | Manages and provisions x86 and Arm bare metal instances. |
| Ceph node 1 | ceph1 | - | Ceph cluster node 1: Manager (MGR) node, Monitor node, and storage node |
| Ceph node 2 | ceph2 | - | Ceph cluster node 2: Monitor node and storage node |
| Ceph node 3 | ceph3 | - | Ceph cluster node 3: Monitor node and storage node |
| x86 bare metal instance node | - | x86 server | - |
| Arm bare metal instance node | - | Kunpeng server | - |
Software Environment
Table 2 lists the software versions used in the hybrid deployment.
| Software | Version | How to Obtain | Installation Guide |
|---|---|---|---|
| OS | CentOS 7.6 | Download the software from the official CentOS website. | - |
| OpenStack | Stein | Automatic installation from the Yum repository. | "Hybrid Deployment of OpenStack" and "Installing and Deploying the OpenStack Bare Metal Services" in this document |
| Ceph | 14.2.1 | Automatic installation from the Yum repository. | Ceph Block Storage Deployment Guide (CentOS 7.6 & openEuler 20.03) |
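The Yum-based OpenStack installation in Table 2 can be sketched as follows. This is a minimal sketch assuming the standard CentOS 7 extras release package for Stein (`centos-release-openstack-stein`) is reachable from your mirror; verify availability in your environment before relying on it.

```shell
# Install the Yum repository definition for the OpenStack Stein release
# (assumes CentOS 7.6 with the default extras repository enabled).
yum install -y centos-release-openstack-stein

# Refresh repository metadata, then install a core OpenStack client to
# confirm the Stein repository is usable.
yum makecache
yum install -y python-openstackclient

# Verify the client is installed.
openstack --version
```

The same repository then serves the Nova, Neutron, and Ironic packages installed in the later deployment sections.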
Cluster Environment
In this document, the OpenStack+Ceph VM cluster and the BMS cluster are deployed on nine servers: three servers form the Ceph cluster, three servers host the OpenStack environment and act as Ceph client nodes, one server serves as the BMS management node, and two servers serve as bare metal instance nodes.
- Hybrid Deployment of VMs
The controller node manages the entire OpenStack cluster and is the entry point for all management operations. In the hybrid deployment, the x86-compute node acts as both the network node for the x86 AZ and an x86 compute node, providing network functions for all x86 compute nodes in that AZ. Similarly, the arm-compute node acts as both the network node for the Arm AZ and an Arm compute node, providing network functions for all Arm compute nodes in that AZ.
Three Ceph nodes (ceph1, ceph2, and ceph3) provide backend block storage for the OpenStack cluster in the hybrid deployment. Storage pools are created to provide storage services for different AZs.
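Creating separate storage pools for the two AZs can be sketched as below. The pool names (`volumes-x86`, `volumes-arm`) and the placement-group count are illustrative assumptions, not values from this document; size placement groups for your actual OSD count.

```shell
# On a Ceph Monitor node: create one RBD pool per availability zone.
# Pool names and the PG count (128) are illustrative.
ceph osd pool create volumes-x86 128
ceph osd pool create volumes-arm 128

# Tag both pools for RBD use so the OpenStack block storage service
# can consume them.
ceph osd pool application enable volumes-x86 rbd
ceph osd pool application enable volumes-arm rbd

# Confirm the pools exist.
ceph osd lspools
```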
- Hybrid Deployment of BMSs
The controller node manages the entire OpenStack cluster and is the entry point for all OpenStack service management operations. The baremetal node is the entry point for all bare metal service management operations. The BMS deployment reuses the network service from the VM hybrid deployment to install and provision bare metal instances.
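Splitting the compute nodes into x86 and Arm AZs can be sketched with standard OpenStack CLI commands run on the controller node. The aggregate and zone names (`agg-x86`, `az-x86`, and so on) are assumptions for illustration; the hostnames come from Table 1.

```shell
# Create one host aggregate per architecture, each bound to its own
# availability zone (aggregate and zone names are illustrative).
openstack aggregate create --zone az-x86 agg-x86
openstack aggregate create --zone az-arm agg-arm

# Add each compute node (hostnames from Table 1) to its aggregate.
openstack aggregate add host agg-x86 x86-compute
openstack aggregate add host agg-arm arm-compute

# List AZs to confirm both zones are visible to the scheduler.
openstack availability zone list
```

Instances launched with `--availability-zone az-x86` or `--availability-zone az-arm` then land on the matching architecture.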
For details about the networking and IP address configuration, see Figure 1 and Table 3. Set IP addresses based on actual requirements.
| Node | NIC Name/OpenStack Management IP Address | NIC Name/Tenant Network | Description |
|---|---|---|---|
| controller | eno3, 192.168.100.120 | - | Controller node and Ceph client node in the hybrid deployment |
| x86-compute | eno3, 192.168.100.121 | enp64s0 | x86 AZ network node, compute node, and Ceph client node in the hybrid deployment |
| arm-compute | eno3, 192.168.100.122 | enp64s0 | Arm AZ network node, compute node, and Ceph client node in the hybrid deployment |
| baremetal | eno3, 192.168.100.100 | enp64s0, 192.168.101.2 | BMS management node |
| ceph1 | eno3, 192.168.100.123 | - | Ceph storage node |
| ceph2 | eno3, 192.168.100.124 | - | Ceph storage node |
| ceph3 | eno3, 192.168.100.125 | - | Ceph storage node |
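On CentOS 7.6, the static management addresses in Table 3 can be set through ifcfg files. The sketch below configures eno3 on the controller node; the netmask is an illustrative assumption, so adjust it (and add a gateway if needed) to match your actual network plan.

```shell
# Write /etc/sysconfig/network-scripts/ifcfg-eno3 on the controller node.
# The 255.255.255.0 netmask is an illustrative assumption.
cat > /etc/sysconfig/network-scripts/ifcfg-eno3 <<'EOF'
TYPE=Ethernet
BOOTPROTO=static
NAME=eno3
DEVICE=eno3
ONBOOT=yes
IPADDR=192.168.100.120
NETMASK=255.255.255.0
EOF

# Restart networking so the address takes effect (CentOS 7 network service).
systemctl restart network
```

Repeat the same pattern on each node with the IP address listed for it in Table 3.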
