
Deploying Ceph

  1. Install the Ceph software and deploy the MON and MGR nodes.

    For details, see Installing the Ceph Software, Deploying MON Nodes, and Deploying MGR Nodes in the Ceph Block Storage Deployment Guide (CentOS 7.6 & openEuler 20.03).

  2. Deploy OSD nodes.

Before performing this operation, determine which hard drives will be used as data drives and ensure that they contain no partitions. If any partitions remain, clear them first.

    1. Check whether each hard drive has partitions.
      lsblk
      
    2. If a hard drive has partitions, clear the partitions. The following command uses the drive /dev/sdb as an example.
      ceph-volume lvm zap /dev/sdb --destroy
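
The two checks above can be combined into a small loop that only zaps drives that still carry partitions. This is a minimal sketch; the drive list is a hypothetical example, so substitute your own data drives, and note that it only prints the zap commands rather than running them:

```shell
# Candidate data drives -- a hypothetical example list; replace with your own.
DRIVES="sdb sdc sdd"
for d in $DRIVES; do
    # Count entries below the disk itself (partitions, LVM volumes, etc.).
    parts=$(lsblk -n -o NAME "/dev/$d" 2>/dev/null | tail -n +2 | wc -l)
    if [ "${parts:-0}" -gt 0 ]; then
        # Print the command first; run it only after verifying the drive.
        echo "ceph-volume lvm zap /dev/$d --destroy"
    fi
done
```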
      
    3. On ceph1, create the create_osd.sh script, which uses the 12 bcache drives on each server as OSD data drives.
      cd /etc/ceph
      vi /etc/ceph/create_osd.sh
      
      Add the following content:
      #!/bin/bash
      # On each node, pair the first 6 bcache drives with nvme0n1 and the
      # last 6 with nvme1n1. On each SSD, partitions p1-p6 hold the DB and
      # p7-p12 hold the WAL.
      for node in ceph1 ceph2 ceph3
      do
               j=7   # WAL partition index (p7-p12)
               k=1   # DB partition index (p1-p6)
               for i in $(ssh ${node} "ls /sys/block | grep bcache | head -n 6")
               do
                       ceph-deploy osd create ${node} --data /dev/${i} --block-wal /dev/nvme0n1p${j} --block-db /dev/nvme0n1p${k}
                       ((j=${j}+1))
                       ((k=${k}+1))
                       sleep 3
               done
               j=7
               k=1
               for i in $(ssh ${node} "ls /sys/block | grep bcache | tail -n 6")
               do
                       ceph-deploy osd create ${node} --data /dev/${i} --block-wal /dev/nvme1n1p${j} --block-db /dev/nvme1n1p${k}
                       ((j=${j}+1))
                       ((k=${k}+1))
                       sleep 3
               done
      done
      
      • This script applies only to the current hardware configuration. For other hardware configurations, you need to modify the script.
      • In the ceph-deploy osd create command:
        • ${node} specifies the hostname of a node.
        • --data specifies a data drive. The back-end drive of bcache is used as a data drive.
        • --block-db specifies the DB partition.
        • --block-wal specifies the WAL partition.
      • The DB and WAL partitions are deployed on NVMe SSDs to improve write performance. If no NVMe SSD is configured, or if the NVMe SSDs are used as data drives, do not specify --block-wal or --block-db; specify only --data.
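
Before running the script, it is worth confirming the partition layout it assumes: each NVMe SSD must already carry 12 partitions (p1-p6 for the DB, p7-p12 for the WAL). A minimal sanity check, assuming the device names used above:

```shell
# Count partitions on each NVMe SSD; the create_osd.sh script expects 12
# per device (p1-p6 for DB, p7-p12 for WAL).
for ssd in nvme0n1 nvme1n1; do
    n=$(lsblk -n -o NAME "/dev/$ssd" 2>/dev/null | grep -c "${ssd}p")
    echo "$ssd: $n partitions (expect 12)"
done
```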
    4. Run the script on ceph1.
      bash create_osd.sh
      
    5. Check whether the OSDs are successfully created.
      ceph -s

      If all 36 OSDs (12 per server across the 3 servers) are in the up state, the creation is successful.
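
The OSD count can also be checked mechanically by parsing the summary that ceph osd stat prints. The sketch below runs against a sample summary line, since a live cluster is needed for the real command:

```shell
# On a live cluster: status=$(ceph osd stat)
status="36 osds: 36 up (since 5m), 36 in (since 5m)"   # sample output
total=$(echo "$status" | awk '{print $1}')             # total OSD count
up=$(echo "$status" | grep -o '[0-9]* up' | awk '{print $1}')  # OSDs up
if [ "$total" = "$up" ]; then
    echo "all $total OSDs are up"
fi
```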