Environment
Physical Networking
The physical environment for the Ceph block devices consists of three nodes and two network layers. The MON, MGR, MDS, and OSD daemons are co-deployed on the same nodes. At the network layer, the public network is separated from the cluster network, and both networks communicate over 25GE optical ports.
The Ceph cluster consists of Ceph clients and Ceph servers. Figure 1 shows the networking mode.
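The public/cluster network separation described above is declared in ceph.conf. A minimal sketch follows; the two subnets are hypothetical placeholders and must match the actual IP network segment planning in Table 3:

```ini
[global]
# Client and MON traffic travels over the public network.
public_network = 192.168.10.0/24
# OSD replication, recovery, and heartbeat traffic uses the
# dedicated cluster network over the second 25GE port.
cluster_network = 192.168.20.0/24
```

Keeping OSD replication traffic off the public network prevents client I/O from competing with recovery bandwidth.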
Hardware Configuration
Table 1 shows the Ceph hardware configuration.
| Item | Configuration |
|---|---|
| Server | TaiShan 200 server (model 2280) |
| Processor | Kunpeng 920 5230 processor |
| Cores | 2 x 32 cores |
| CPU Frequency | 2600 MHz |
| Memory Capacity | 12 x 16 GB |
| Memory Frequency | 2666 MHz (8 Micron 2R memory modules) |
| NIC | IN200 NIC (4 x 25GE) |
| Drive | System drives: RAID 1 (2 x 960 GB SATA SSDs). Data drives of general-purpose storage: 12 x 4 TB SATA HDDs (JBOD enabled in RAID mode) |
| NVMe SSD | Acceleration drive of general-purpose storage: 1 x 3.2 TB ES3600P V5 NVMe SSD. Data drives of high-performance storage: 12 x 3.2 TB ES3600P V5 NVMe SSDs |
| RAID Controller Card | Avago SAS 3508 |
Software Versions
Table 2 lists the required software versions.
Node Information
Table 3 describes the IP network segment planning of the hosts.
Component Deployment
Table 4 describes the deployment of service components in the Ceph block device cluster.
Cluster Check
Run the ceph health command to check the cluster health status. If HEALTH_OK is displayed, the cluster is running properly; HEALTH_WARN or HEALTH_ERR indicates an issue that must be investigated before proceeding.
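For automated checks, the same status can be read from the JSON output of ceph status. A minimal sketch, assuming the ceph CLI is on the PATH of a cluster node (the sample JSON at the bottom only illustrates the relevant field, not a full status report):

```python
import json
import subprocess


def cluster_is_healthy(status_json: str) -> bool:
    """Return True when the Ceph status JSON reports HEALTH_OK."""
    status = json.loads(status_json)
    return status.get("health", {}).get("status") == "HEALTH_OK"


def check_cluster() -> bool:
    # 'ceph status --format json' emits the cluster state as JSON.
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return cluster_is_healthy(out)


if __name__ == "__main__":
    # Illustrative fragment of the JSON shape; a live cluster returns
    # many more fields (monmap, osdmap, pgmap, and so on).
    sample = '{"health": {"status": "HEALTH_OK"}}'
    print(cluster_is_healthy(sample))
```

Parsing the JSON form is more robust in scripts than matching the plain-text output of ceph health, whose wording can vary between releases.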