
Deploying a Ceph Cluster

  1. Create a local repository.
    yum -y install createrepo
    mkdir /home/ceph-compaction
    cd /home/ceph-compaction
    cp /home/rpmbuild/RPMS/aarch64/*rpm ./
    createrepo ./
    cd /etc/yum.repos.d/
    vi ceph-local.repo
    
    [local]
    name=local
    baseurl=file:///home/ceph-compaction
    enabled=1
    gpgcheck=0
    [Ceph-noarch]
    name = Ceph noarch packages
    baseurl = http://download.ceph.com/rpm-nautilus/el7/noarch
    enabled = 1
    gpgcheck = 1
    type = rpm-md
    gpgkey = https://download.ceph.com/keys/release.asc
    priority = 1
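
    After writing the repo file, it may help to refresh the yum metadata cache and confirm that the local repository is visible. A minimal check (assumes a standard yum setup; the repository ID `local` matches the section name defined above):

    ```shell
    # Rebuild the yum metadata cache so the new local repository is picked up
    yum clean all
    yum makecache
    # Confirm that the [local] repository appears in the enabled list
    yum repolist enabled | grep local
    ```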
    
  2. Deploy MON and MGR nodes.

    For details, see the corresponding Ceph deployment guide.

    In that deployment guide, the Ceph package source is the official Ceph repository, whose RPM packages do not include the data compaction algorithm plugin. Therefore, configure Ceph to install from the local repository created in step 1 instead. The data compaction algorithm supports only Ceph 14.2.8, and its configuration can also be adjusted dynamically during deployment (see step 3).
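
    Because the plugin supports only Ceph 14.2.8, it may be worth confirming the installed version on each node after installation. A hedged check (assumes the `ceph` binary is on the PATH):

    ```shell
    # Verify that the Ceph version installed from the local repository is 14.2.8
    ceph --version | grep -q "14.2.8" \
      && echo "Ceph 14.2.8 detected" \
      || echo "WARNING: unexpected Ceph version"
    ```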

  3. Modify the Ceph configuration file ceph.conf.

    The product of osd_op_num_shards_hdd and osd_op_num_threads_per_shard_hdd determines the number of threads an OSD process uses to handle I/O requests. The default is 5 x 1 (five shards, one thread per shard). Change it to 12 x 2 to get the best performance from the data compaction algorithm.

    • The configuration items in this step apply only to the HDD scenario.
    • These values can also be adjusted dynamically after the OSD nodes are deployed.
    vi /etc/ceph/ceph.conf
    

    Change the default number of OSD threads.

    osd_op_num_shards_hdd = 12
    osd_op_num_threads_per_shard_hdd = 2
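
    As noted above, these values can also be changed after the OSDs are deployed. A sketch using the `ceph config` interface available in Nautilus (assumes a running cluster and an admin keyring; depending on the build, the OSD daemons may still need a restart before the new thread counts take effect):

    ```shell
    # Apply the thread settings cluster-wide for all OSDs
    ceph config set osd osd_op_num_shards_hdd 12
    ceph config set osd osd_op_num_threads_per_shard_hdd 2
    # Spot-check that the value was recorded for one OSD (osd.0 is an example ID)
    ceph config get osd.0 osd_op_num_shards_hdd
    ```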
    

  4. Deploy OSD nodes.

    For details, see Deploying OSD Nodes in the Ceph Block Storage Deployment Guide (CentOS 7.6 & openEuler 20.03).