
Ceph Code Optimization

Code optimizations include EC Turbo and some common basic optimizations in Ceph, such as CRUSH algorithm acceleration and RocksDB CRC algorithm optimization.

After EC Turbo and the basic optimizations are applied, read/write performance improves significantly. Both EC Turbo and the basic optimizations are delivered in a single patch. The following describes the compilation and deployment procedure.

  1. Obtain the installation package. For details, see "Obtaining the EC Turbo Installation Package" in the EC Turbo Feature Guide.
  2. Obtain Ceph source code.
    1. Download the ceph-14.2.10 source package.
    2. Save the source package in the /home directory and decompress it.
      cd /home
      tar -zxvf ceph-14.2.10.tar.gz
      
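If no internal mirror is prescribed, the source package can be fetched from the upstream Ceph download site. A sketch of the fetch-and-extract steps follows; the URL is an assumption about where the tarball is hosted, so substitute your own source if one is provided:

```shell
cd /home
# Upstream tarball location (assumed; use an internal mirror if your
# environment provides one).
wget https://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
tar -zxvf ceph-14.2.10.tar.gz
```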
  3. Apply the Ceph EC optimization patch.
    1. Download the patch and save it to /home/ceph-14.2.10.
    2. Back up the original ceph.spec file.
      cd /home/ceph-14.2.10
      mv ceph.spec ceph.spec.bak
      
    3. Apply the patch downloaded in step 1. After the patch is applied, a new ceph.spec file is generated.
      patch -p1 < ceph-ecturbo-optimization.patch
      
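Before applying the patch for real, GNU patch's standard --dry-run option can confirm that it applies cleanly against the source tree; a quick check might look like:

```shell
cd /home/ceph-14.2.10
# --dry-run reports what would change without modifying any files.
# If this completes with no failed hunks, apply the patch for real.
patch -p1 --dry-run < ceph-ecturbo-optimization.patch
```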
  4. Build the liboath-devel package. For details, see "Building the liboath-devel Package" in the EC Turbo Feature Guide.
  5. Prepare the Ceph compilation environment. For details, see "Preparing the Compilation Environment" in the EC Turbo Feature Guide.

    The optimizations require the hugepage feature, whose dependency the install-deps.sh script does not install. After running install-deps.sh, run the following command to install the dependency:

    yum install libhugetlbfs
    
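Beyond installing the library, the kernel's hugepage pool must also be reserved before hugepages can be used. A minimal sketch follows; the page count of 1024 is illustrative only, as this guide does not state how many pages the optimizations need:

```shell
# Reserve 2 MiB hugepages for the running kernel. 1024 pages (2 GiB)
# is an illustrative count; size the pool to your deployment.
sysctl -w vm.nr_hugepages=1024
# Verify the reservation took effect.
grep HugePages_Total /proc/meminfo
# Persist the setting across reboots.
echo "vm.nr_hugepages = 1024" >> /etc/sysctl.conf
```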
  6. Compile and verify Ceph. For details, see "Compiling and Verifying Ceph".
  7. Create a Ceph RPM package. For details, see "Creating a Ceph RPM Package".
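The full RPM build procedure is in the guide referenced above. As a rough sketch, building RPMs from a patched Ceph 14.x tree typically looks like the following; the make-dist script ships in the Ceph source tree, and the exact tarball name and build paths here are assumptions:

```shell
cd /home/ceph-14.2.10
# make-dist (part of the Ceph source tree) produces the release
# tarball that the spec file expects.
./make-dist
mkdir -p ~/rpmbuild/SOURCES
cp ceph-14.2.10*.tar.bz2 ~/rpmbuild/SOURCES/
# Build binary and source RPMs from the patched spec file.
rpmbuild -ba ceph.spec
```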
  8. Deploy a Ceph cluster. For details, see "Deploying a Ceph Cluster".

    In this environment, all data drives are NVMe drives, so you do not need to create separate OSD partitions when deploying OSD nodes: an NVMe drive can be used directly as an OSD data drive. However, to maximize hardware performance, the optimal number of OSDs on each server is 24. You are therefore advised to divide each NVMe drive into three partitions and deploy one OSD on each partition.
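    For one drive, the three-way split and per-partition OSD creation can be sketched as follows. The device name /dev/nvme0n1 is a placeholder, and the equal-thirds split is an assumption; ceph-volume is the standard Nautilus-era tool for creating BlueStore OSDs:

```shell
# Split the drive into three equal GPT partitions (placeholder device).
parted -s /dev/nvme0n1 mklabel gpt \
  mkpart osd1 0% 33% \
  mkpart osd2 33% 66% \
  mkpart osd3 66% 100%
# Create one BlueStore OSD per partition.
ceph-volume lvm create --data /dev/nvme0n1p1
ceph-volume lvm create --data /dev/nvme0n1p2
ceph-volume lvm create --data /dev/nvme0n1p3
```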