Environment Requirements

Hardware Requirements

In this document, the compute node and local storage node are deployed on the same server.

In this minimum configuration, the hybrid deployment requires nine servers: three servers in the OpenStack cluster, three servers in the Ceph cluster, one BMS management node, and two server nodes (one x86 server and one Arm server) for verifying bare metal instance provisioning.

  • In the VM hybrid deployment, at least six servers are required if Ceph needs to be connected as the storage backend; otherwise, at least three servers are required.
  • In the BMS hybrid deployment, three servers are required: one BMS management node and two server nodes (one x86 server and one Arm server) for verifying bare metal instance provisioning.

Table 1 lists the node roles for each server.

Table 1 Hardware environment

  • Controller node (hostname: controller)
    Model/Configuration: x86 server; 2 x 28-core 2.1 GHz processors, 4 x 32 GB memory, and 4 x 1.2 TB SAS HDDs
    Remarks: This node functions as the OpenStack controller management node in the hybrid deployment scenario.

  • x86 compute node/x86 network node (hostname: x86-compute)
    Model/Configuration: x86 server; 2 x 28-core 2.1 GHz processors, 4 x 32 GB memory, and 4 x 1.2 TB SAS HDDs
    Remarks: This node functions as the network node in the x86 AZ, x86 compute node, and local storage node in the hybrid deployment.

  • Arm compute node/Arm network node (hostname: arm-compute)
    Model/Configuration: Kunpeng server; 2 x Kunpeng 920 5250 processors, 4 x 32 GB memory, 4 x 1.2 TB SAS HDDs, 1 x Avago 3508 RAID controller card, and 1 x 1822 NIC
    Remarks: This node functions as the Arm AZ network node, Arm compute node, and local storage node in the hybrid deployment.

  • BMS management node (hostname: baremetal)
    Model/Configuration: x86 server; 2 x 28-core 2.1 GHz processors, 4 x 32 GB memory, and 4 x 1.2 TB SAS HDDs
    Remarks: This node is the BMS management node and is responsible for managing and provisioning x86 and Arm bare metal instances.

  • Ceph node 1 (hostname: ceph1)
    Model/Configuration: Kunpeng server; 2 x Kunpeng 920 5250 processors, 4 x 32 GB memory, 4 x 1.2 TB SAS HDDs, 1 x Avago 3508 RAID controller card, and 1 x 1822 NIC
    Remarks: Ceph cluster node 1; Ceph cluster Manager (MGR) node, storage node, and Monitor node

  • Ceph node 2 (hostname: ceph2)
    Model/Configuration: Kunpeng server; 2 x Kunpeng 920 5250 processors, 4 x 32 GB memory, 4 x 1.2 TB SAS HDDs, 1 x Avago 3508 RAID controller card, and 1 x 1822 NIC
    Remarks: Ceph cluster node 2; Ceph cluster storage node and Monitor node

  • Ceph node 3 (hostname: ceph3)
    Model/Configuration: Kunpeng server; 2 x Kunpeng 920 5250 processors, 4 x 32 GB memory, 4 x 1.2 TB SAS HDDs, 1 x Avago 3508 RAID controller card, and 1 x 1822 NIC
    Remarks: Ceph cluster node 3; Ceph cluster storage node and Monitor node

  • x86 bare metal instance nodes (hostname: -)
    Model/Configuration: x86 server
    Remarks: -

  • Arm bare metal instance nodes (hostname: -)
    Model/Configuration: Kunpeng server
    Remarks: -
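Before starting the deployment, it can be useful to confirm that each server matches the configuration listed in Table 1. The commands below are a minimal verification sketch using standard Linux tools; the exact output depends on the server model and OS.

    # Check the CPU model, socket count, and cores per socket
    lscpu | grep -E 'Model name|Socket|Core'
    # Check the total memory (about 128 GB per node is expected)
    free -g
    # List the physical disks (4 x 1.2 TB SAS HDDs are expected)
    lsblk -d -o NAME,SIZE,TYPE,ROTA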

Software Environment

Table 2 lists the software versions used in the hybrid deployment.

Table 2 Software version list

  • OS: CentOS 7.6
    How to Obtain: Download the software from the official CentOS website at https://www.centos.org/download/.
    Installation Guide: -

  • OpenStack: Stein
    How to Obtain: Perform automatic installation using the Yum source.
    Installation Guide: Hybrid Deployment of OpenStack and Installing and Deploying the OpenStack Bare Metal Services in this document

  • Ceph: 14.2.1
    How to Obtain: Perform automatic installation using the Yum source.
    Installation Guide: Ceph Block Storage Deployment Guide (CentOS 7.6 & openEuler 20.03)
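OpenStack Stein and Ceph 14.2.1 (Nautilus) are installed automatically from the Yum source, as described in the guides listed in Table 2. The commands below are a minimal sketch, assuming the standard CentOS 7 extras repositories are reachable; the release package names are the upstream CentOS ones and may differ from the Yum source actually used in your environment.

    # Verify the OS release (CentOS Linux release 7.6 is expected)
    cat /etc/centos-release
    # Enable the OpenStack Stein repository
    yum install -y centos-release-openstack-stein
    # Enable the Ceph Nautilus (14.2.x) repository
    yum install -y centos-release-ceph-nautilus
    # Confirm that the repositories are now available
    yum repolist | grep -iE 'openstack|ceph'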

Cluster Environment

In this document, the OpenStack+Ceph VM cluster and BMS cluster are deployed on nine servers. Three servers are used to create the Ceph cluster, three servers are used as the OpenStack environment and Ceph client nodes, one server is used as the BMS management node, and two servers are used as bare metal instance nodes.

  • Hybrid Deployment of VMs

    The controller node manages the entire OpenStack cluster and is the entry point for all management operations. In the hybrid deployment scenario, the x86-compute node functions as both the network node and a compute node in the x86 AZ, providing network functions for all x86 compute nodes in that AZ. Likewise, the arm-compute node functions as both the network node and a compute node in the Arm AZ, providing network functions for all Arm compute nodes in that AZ.

    Three Ceph nodes (ceph1, ceph2, and ceph3) provide backend block storage for the OpenStack cluster in the hybrid deployment. Storage pools are created on the Ceph cluster to provide storage services for the different AZs (see the sketch after this list).

  • Hybrid Deployment of BMSs

    The controller node manages the entire OpenStack cluster and is the entry point for all OpenStack service management operations, while the baremetal node is the entry point for all bare metal service management operations. The BMS deployment reuses the network services of the VM hybrid deployment to install and provision bare metal instances.
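The following is a minimal sketch of creating separate storage pools for the x86 and Arm AZs on the Ceph cluster. The pool names and placement group counts are illustrative assumptions; use the values given in the Ceph deployment guide for the actual cluster.

    # Create one RBD pool per AZ (pool names and PG counts are examples only)
    ceph osd pool create vms-x86 128
    ceph osd pool create vms-arm 128
    # Initialize the pools for use as RBD-backed block storage
    rbd pool init vms-x86
    rbd pool init vms-arm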

For details about the networking and IP address configuration, see Figure 1 and Table 3. Set IP addresses based on actual requirements.

Figure 1 Cluster networking
Table 3 Cluster environment IP address planning

  • controller
    NIC name/OpenStack management IP address: eno3 / 192.168.100.120
    NIC name/tenant network: -
    Description: Controller node and Ceph client node in the hybrid deployment

  • x86-compute
    NIC name/OpenStack management IP address: eno3 / 192.168.100.121
    NIC name/tenant network: enp64s0
    Description: x86 AZ network node, compute node, and Ceph client node in the hybrid deployment

  • arm-compute
    NIC name/OpenStack management IP address: eno3 / 192.168.100.122
    NIC name/tenant network: enp64s0
    Description: Arm AZ network node, compute node, and Ceph client node in the hybrid deployment

  • baremetal
    NIC name/OpenStack management IP address: eno3 / 192.168.100.100
    NIC name/tenant network: enp64s0 / 192.168.101.2
    Description:
      • The BMS management node is an OpenStack compute node.
      • The IP address of the eno3 network port is used for communication between OpenStack and other management services such as Keystone and MySQL.
      • The IP address of the enp64s0 port is the service IP address of the baremetal node and is used for bare metal server provisioning.

  • ceph1
    NIC name/OpenStack management IP address: eno3 / 192.168.100.123
    NIC name/tenant network: -
    Description: Ceph storage node

  • ceph2
    NIC name/OpenStack management IP address: eno3 / 192.168.100.124
    NIC name/tenant network: -
    Description: Ceph storage node

  • ceph3
    NIC name/OpenStack management IP address: eno3 / 192.168.100.125
    NIC name/tenant network: -
    Description: Ceph storage node
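To make the nodes resolvable by hostname over the OpenStack management network, the addresses planned in Table 3 can be added to /etc/hosts on each node. This is a minimal sketch based on the example addresses above; adjust it to the IP addresses actually assigned in your environment.

    # Entries to append to /etc/hosts on each node (example addresses from Table 3)
    192.168.100.120 controller
    192.168.100.121 x86-compute
    192.168.100.122 arm-compute
    192.168.100.100 baremetal
    192.168.100.123 ceph1
    192.168.100.124 ceph2
    192.168.100.125 ceph3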