Adding OSDs
In this section, each NVMe SSD is divided into twelve 60 GB partitions and twelve 180 GB partitions, which are used as the WAL and DB partitions respectively.
- Create a partition.sh script. (Skip this step if partitioning is not required.)
vi partition.sh
- Add the following content to the script (assume that a single NVMe SSD is divided into 12 WAL/DB partition pairs):
#!/bin/bash
parted /dev/nvme0n1 mklabel gpt
for j in `seq 1 12`
do
  ((b = j * 8))
  ((a = b - 8))
  ((c = b - 6))
  str="%"
  echo $a
  echo $b
  echo $c
  parted /dev/nvme0n1 mkpart primary ${a}${str} ${c}${str}
  parted /dev/nvme0n1 mkpart primary ${c}${str} ${b}${str}
done
- Run the script. (Skip this step if partitioning is not required.)
bash partition.sh
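Before running the script against a real device, the loop arithmetic can be verified with a dry run that only prints the percentage boundaries the script would pass to parted. This is a sketch that makes no parted calls, so it is safe to run anywhere:

```shell
#!/bin/bash
# Dry run of the partition.sh arithmetic: for j = 1..12, each 8% slice of
# the disk is split at its 2% mark into a small (WAL) and a large (DB)
# partition. Only the boundaries are printed; no device is touched.
for j in `seq 1 12`
do
  b=$(( j * 8 ))
  a=$(( b - 8 ))
  c=$(( b - 6 ))
  echo "WAL: ${a}% -> ${c}%   DB: ${c}% -> ${b}%"
done
```

Each WAL partition spans 2% of the disk and each DB partition 6%, which on a drive of roughly 3 TB yields the approximately 60 GB and 180 GB sizes stated at the start of this section.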
- Create a create_osd.sh script on ceph1 to deploy OSDs on the NVMe drives (/dev/nvme0n1 through /dev/nvme7n1) of each server.
vi /etc/ceph/create_osd.sh
- Add the following content to the script:
#!/bin/bash
for node in ceph1 ceph2 ceph3
do
  for i in {0..7}
  do
    ceph-deploy osd create ${node} --data /dev/nvme${i}n1
  done
done
- Run the script on ceph1.
bash /etc/ceph/create_osd.sh
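Because ceph-deploy stops partway through if a data device is absent on some node, a quick preflight check on each server can save a failed run. The following sketch assumes the same device layout as the script above (/dev/nvme0n1 through /dev/nvme7n1) and should be run locally on each node:

```shell
#!/bin/bash
# Count how many of the expected OSD data devices are missing on this host.
# Any device that is not a block device here would cause the matching
# "ceph-deploy osd create" call to fail.
missing=0
for i in {0..7}
do
  dev="/dev/nvme${i}n1"
  if [ ! -b "$dev" ]; then
    echo "missing: $dev"
    missing=$(( missing + 1 ))
  fi
done
echo "${missing} of 8 expected devices missing"
```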
- Check whether the OSDs are created.
ceph -s
