
Adding OSDs

An Object Storage Daemon (OSD) is the Ceph service that stores cluster data. Before a device can be added as an OSD, all of the following conditions must be met:

  • The device must have no partitions.
  • The device must not carry any LVM state.
  • The device must not be mounted.
  • The device must not contain a file system.
  • The device must not contain a Ceph BlueStore OSD.
  • The device must be larger than 5 GB.
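These rules can be checked before a device is added. The helper below is a hypothetical sketch (the function name and arguments are illustrative, not part of Ceph); in practice you would feed it values read from lsblk or ceph orch device ls. The LVM and BlueStore checks are omitted for brevity:

```shell
# Hypothetical eligibility check mirroring the conditions above
# (LVM and BlueStore checks omitted for brevity).
# Arguments: <partition_count> <fstype_or_empty> <mounted:yes|no> <size_gb>
device_eligible() {
  local partitions="$1" fstype="$2" mounted="$3" size_gb="$4"
  if [ "$partitions" -ne 0 ]; then echo "no: has partitions"; return 1; fi
  if [ -n "$fstype" ]; then echo "no: contains a file system"; return 1; fi
  if [ "$mounted" = "yes" ]; then echo "no: is mounted"; return 1; fi
  if [ "$size_gb" -le 5 ]; then echo "no: 5 GB or smaller"; return 1; fi
  echo "yes"
}

# An unpartitioned, unmounted 1600 GB drive with no file system passes:
device_eligible 0 "" no 1600
```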
  1. Check the available devices on each host.
    ceph orch device ls --wide --refresh

  2. Add OSDs.
    Method 1: Add all devices that meet the conditions as OSDs.
    ceph orch apply osd --all-available-devices
    Method 2: Add OSDs manually. (The NVMe SSDs on ceph1 are used as an example; the loop adds eight NVMe drives on each of nodes ceph1 to ceph3.)
    ceph orch daemon add osd ceph1:/dev/nvme0n1
    for node in {1..3}; do for i in {0..7}; do ceph orch daemon add osd ceph${node}:/dev/nvme${i}n1; done; done
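The one-line loop above expands to 24 commands (3 nodes × 8 NVMe drives). A dry run that prints the commands instead of executing them can confirm the device paths before any OSD is created (assumes bash for brace expansion):

```shell
# Dry run: echo each "ceph orch daemon add osd" command instead of running it.
for node in {1..3}; do
  for i in {0..7}; do
    echo "ceph orch daemon add osd ceph${node}:/dev/nvme${i}n1"
  done
done
```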
    Method 3: Deploy OSDs in an advanced mode.

    You can use a .yaml configuration file to start a service that deploys OSDs. This mode has the following advantages:

    • You can specify devices.
    • Multiple OSDs can be deployed on one drive.
    • You can restart OSDs through the service.
    1. Create an osd_spec.yaml file to specify available SSDs.
      vi osd_spec.yaml
      Add the following content to the file. Two common examples are shown below:
      Specifying available SSDs
      service_type: osd
      service_id: x18_bluestore
      placement:
        hosts: # Set it to the actual node names.
          - node1
          - node2
          - node3
      osds_per_device: 1 # Number of OSDs booted by an SSD
      #unmanaged: True         
      spec:
        data_devices:
          paths: # Fill in available drives obtained by running the ceph orch device ls command.
            - /dev/nvme0n1
            - /dev/nvme1n1
            - /dev/nvme2n1
            - /dev/nvme3n1
      Filtering available drives by SSD model, limiting the number of drives used, and specifying the number of OSDs to boot on each drive
      service_type: osd
      service_id: osd_nvme_1.5T
      placement:
      #  host_pattern: '*'
        hosts: # Set it to the actual node names.
          - node1
          - node2
          - node3
      osds_per_device: 1 # Number of OSDs booted by an SSD
      #unmanaged: True         
      spec:
        data_devices:
          model: HWE56P431T6M002N # Run the ceph-volume inventory command to obtain the SSD model.
          limit: 1 # Only one drive of the model can be used.
    2. Start the OSD service.
      ceph orch apply -i osd_spec.yaml
  3. Check the cluster status.
    ceph -s