Deploying OSD Nodes
Perform the following operations on the specified nodes to deploy object storage device (OSD) nodes.
Creating OSD Partitions
Perform the following operations on ceph1 to ceph3. The following uses /dev/nvme0n1 as an example. If there are multiple NVMe SSDs, repeat the operations for each of them.
For non-recommended configurations, if the space of the DB partition and WAL partition of the NVMe SSD is insufficient, data will be stored in an HDD, affecting performance.
In the following steps, the NVMe SSD is divided into twelve 60 GB partitions and twelve 180 GB partitions, which correspond to the WAL and DB partitions respectively.
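The partition sizes above follow from simple percentage arithmetic. The sketch below prints the same boundaries the partitioning script uses, together with the approximate size of each slice; the 3000 GB usable capacity is an assumption for illustration, not a value from this guide:

```shell
#!/bin/bash
# Assumed usable capacity in GB; adjust to the actual NVMe SSD.
DISK_GB=3000
for j in $(seq 1 12)
do
    b=$(( j * 8 ))   # end of the DB partition, in percent
    a=$(( b - 8 ))   # start of the WAL partition
    c=$(( b - 6 ))   # boundary between the WAL and DB partitions
    echo "WAL $j: ${a}%-${c}% (~$(( DISK_GB * (c - a) / 100 )) GB), DB $j: ${c}%-${b}% (~$(( DISK_GB * (b - c) / 100 )) GB)"
done
```

With a roughly 3000 GB drive, each 2% WAL slice is about 60 GB and each 6% DB slice about 180 GB, matching the sizes stated above.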
- Create a partitioning script.
- Create a partition.sh file.
vi partition.sh
- Press i to enter the insert mode and add the following content:
#!/bin/bash
parted /dev/nvme0n1 mklabel gpt
for j in `seq 1 12`
do
    ((b = $(( $j * 8 ))))
    ((a = $(( $b - 8 ))))
    ((c = $(( $b - 6 ))))
    str="%"
    echo $a
    echo $b
    echo $c
    parted /dev/nvme0n1 mkpart primary ${a}${str} ${c}${str}
    parted /dev/nvme0n1 mkpart primary ${c}${str} ${b}${str}
done
This script applies only to the current hardware configuration. For other hardware configurations, you need to modify the script.
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
- Run the script to create partitions.
bash partition.sh
In the following script, the 12 drives /dev/sda to /dev/sdl are data drives and the OS is installed on /dev/sdm. If the data drives are not numbered consecutively, for example, if the OS is installed on /dev/sde, do not run the script as is; otherwise, an error will be reported when it attempts to deploy on /dev/sde. Instead, modify the script so that it operates only on data drives and does not touch other drives, such as the system drive and the SSD that holds the DB and WAL partitions.
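One way to make such a modification is to skip the system drive explicitly. The sketch below uses echo so that it only prints the commands for review; the drive letters and the OS-on-/dev/sde layout are assumptions for illustration:

```shell
#!/bin/bash
# Hypothetical layout: OS on /dev/sde, data drives on the remaining letters.
OS_DRIVE=e
count=0
for i in {a..m}
do
    if [ "$i" = "$OS_DRIVE" ]; then
        continue   # never operate on the system drive
    fi
    # echo prints the command for review; remove it to actually deploy.
    echo ceph-deploy osd create ceph1 --data /dev/sd${i}
    count=$((count+1))
done
echo "$count data drives selected"
```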
- Check the drive letter of each drive on each node.
lsblk
The following information indicates that /dev/sda is the system drive.
[root@client1 ~]# lsblk
NAME                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                    8:0    0  3.7T  0 disk
├─sda1                 8:1    0    1G  0 part /boot/efi
├─sda2                 8:2    0    1G  0 part /boot
└─sda3                 8:3    0  3.7T  0 part
  ├─openeuler-root   253:0    0  3.6T  0 lvm  /
  └─openeuler-swap   253:1    0    4G  0 lvm  [SWAP]
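When there are many drives, output like the above can also be filtered with awk. The sketch below runs on a hard-coded sample, which is an assumption for illustration; on a node you would pipe the real `lsblk -rn -o NAME,TYPE` output into the same filter:

```shell
#!/bin/bash
# Sample 'lsblk -rn -o NAME,TYPE' output (assumed for illustration).
sample="sda disk
sda1 part
openeuler-root lvm
sdb disk
nvme0n1 disk"
# Keep only whole disks; partitions and LVM volumes are dropped.
disks=$(echo "$sample" | awk '$2 == "disk" {print $1}')
echo "$disks"
```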
Drives that were previously used as system or data drives in a Ceph cluster may contain residual partitions. Run the lsblk command to check for them. For example, if /dev/sdb has residual partitions, run the following command to clear them:
ceph-volume lvm zap /dev/sdb --destroy
Identify the data drives first, and run this destroy command only on data drives that contain residual partitions.
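If several data drives have residual partitions, the zap command can be looped over them. The drive letters below are assumptions for illustration; echo prints the commands for review, and removing it makes the loop destructive:

```shell
#!/bin/bash
# Assumed data drives with residual partitions: /dev/sdb, /dev/sdc, /dev/sdd.
for d in b c d
do
    # Remove echo to actually wipe the drive -- this destroys all data on it.
    echo ceph-volume lvm zap /dev/sd${d} --destroy
done
```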
- Create an OSD deployment script on ceph1.
- Create a create_osd.sh file.
cd /etc/ceph/
vi /etc/ceph/create_osd.sh
- Press i to enter the insert mode and add the following content:
#!/bin/bash
for node in ceph1 ceph2 ceph3
do
    j=1
    k=2
    for i in {a..l}
    do
        ceph-deploy osd create ${node} --data /dev/sd${i} --block-wal /dev/nvme0n1p${j} --block-db /dev/nvme0n1p${k}
        ((j=${j}+2))
        ((k=${k}+2))
        sleep 3
    done
done
This script applies only to the current hardware configuration. For other hardware configurations, you need to modify the script.
In the ceph-deploy osd create command:
- ${node} specifies the host name of the node.
- --data specifies the data drive.
- --block-wal specifies the WAL partition.
- --block-db specifies the DB partition.
DB and WAL partitions are usually deployed on NVMe SSDs to improve write performance. If no NVMe SSD is configured or NVMe SSDs are used as data drives, you do not need to specify --block-db or --block-wal. You only need to specify --data.
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
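For a configuration without an NVMe SSD, the deployment loop above reduces to --data only. A sketch follows; echo prints the commands for review, and removing it would run them on ceph1:

```shell
#!/bin/bash
for node in ceph1 ceph2 ceph3
do
    for i in {a..l}
    do
        # No --block-wal/--block-db: data, WAL, and DB all live on the drive.
        echo ceph-deploy osd create ${node} --data /dev/sd${i}
    done
done
```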
- Run the script on ceph1.
bash create_osd.sh
- Check whether all 36 OSDs are in the up state.
ceph -s
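When all OSDs are up, the osd line of the ceph -s output should report 36 up and 36 in. The sketch below parses a hard-coded sample line, which is an assumption about the healthy output format; on the cluster, pipe the real ceph -s output through the same filter:

```shell
#!/bin/bash
# Assumed healthy status line from 'ceph -s'.
sample="osd: 36 osds: 36 up, 36 in"
up=$(echo "$sample" | grep -o '[0-9]* up' | cut -d' ' -f1)
echo "Up OSDs: $up"
```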