
Creating a Storage Pool

Perform operations in this section only on ceph1.

  1. Create a storage pool. The storage pool name can be customized. The following uses vdbench as an example.
    ceph osd pool create vdbench 32 32
    

    • In the command, vdbench is the storage pool name.
    • The two numbers in the storage pool creation command (for example, ceph osd pool create vdbench 32 32) correspond to the PG quantity and PGP quantity of the created storage pool, respectively.
      • According to the official Ceph document, the recommended total number of storage pool PGs in a cluster is calculated as follows: (Number of OSDs x 100)/Data redundancy factor. For the replication mode, the data redundancy factor is the number of copies. For the erasure code (EC) mode, the data redundancy factor is the sum of the numbers of data blocks and parity blocks. For example, the data redundancy factor is 3 for the three-replica mode and 6 for the EC 4+2 mode.
      • Assume that the cluster has three servers and each server has 12 OSDs. The total number of OSDs is 36. In the three-replica mode, the PG quantity is 1200 (36 x 100/3). It is recommended that the PG quantity be an integral power of 2. In this case, you can set the number of PGs in the vdbench storage pool to 1024.
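      The sizing rule above can be sketched as a quick shell calculation. The OSD count and redundancy factor below are the example values from this section; substitute your cluster's numbers.

      ```shell
      # Example values from this section; substitute your own.
      OSDS=36           # total number of OSDs in the cluster
      REDUNDANCY=3      # number of copies for replication, or k+m for EC (6 for EC 4+2)

      # Recommended total PG count: (Number of OSDs x 100) / data redundancy factor.
      RAW=$(( OSDS * 100 / REDUNDANCY ))

      # Round down to the nearest integral power of 2.
      PGS=1
      while [ $(( PGS * 2 )) -le "$RAW" ]; do
          PGS=$(( PGS * 2 ))
      done

      echo "recommended: $RAW, power-of-two: $PGS"
      # prints: recommended: 1200, power-of-two: 1024
      ```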
    • Example 1: Change the number of copies in a storage pool (for example, change the number to 2).
      1. Obtain the number of copies configured for storage pool vdbench.
        ceph osd pool get vdbench size
      2. Change the number of copies of storage pool vdbench.
        ceph osd pool set vdbench size 2
      3. Check the number of copies of vdbench again. The number is 2.
        ceph osd pool get vdbench size
        size: 2
    • Example 2: Check local EC configuration.
      1. List the local EC profiles. If no custom profile has been created in the cluster, only the default profile exists, and the ceph osd pool create command uses the default configuration.
        ceph osd erasure-code-profile ls

        Command output:

        default
      2. Check the default configuration. k=2 and m=2 are displayed, where k is the number of data blocks and m is the number of parity blocks.
        ceph osd erasure-code-profile get default

      3. If you need to create a new EC profile and apply it to the storage pool, see "Creating a Storage Pool in EC Mode" in the Ceph Object Storage Deployment Guide.
    • For more information about pools, see the description in the Ceph open source community.
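    If you want to experiment with a custom EC profile instead of the default one, the flow looks roughly like the following sketch. The profile name ec-4-2 and pool name ecpool are placeholder examples, not names defined in this guide; see the EC deployment guide referenced above for the full procedure.

    ```shell
    # Sketch only; run against a live Ceph cluster. Names are placeholders.
    # Define a 4+2 profile: 4 data blocks (k) and 2 parity blocks (m).
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2

    # Verify the profile contents.
    ceph osd erasure-code-profile get ec-4-2

    # Create a pool that uses the profile (32 PGs and 32 PGPs, as in the example above).
    ceph osd pool create ecpool 32 32 erasure ec-4-2
    ```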
  2. After creating a storage pool, specify the pool type (CephFS, RBD, or RGW). The following uses block storage (RBD) as an example.
    ceph osd pool application enable vdbench rbd
    

    • vdbench is the storage pool name and rbd is the storage pool type.
    • If the pool already has a different application enabled, append --yes-i-really-mean-it to the end of the command to change the storage pool type.
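    To confirm the application type that was just enabled, you can query it back with the standard Ceph CLI (the exact output shape may vary by release):

    ```shell
    # Show the application(s) enabled on the vdbench pool.
    # A pool enabled for block storage reports "rbd" in the output.
    ceph osd pool application get vdbench
    ```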
  3. Optional: Enable zlib compression for the storage pool.
    ceph osd pool set vdbench compression_algorithm zlib
    ceph osd pool set vdbench compression_mode force
    ceph osd pool set vdbench compression_required_ratio .99
    

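  To verify or later revert the compression settings, the same ceph osd pool get/set interface applies; a sketch:

  ```shell
  # Read back the compression settings on the vdbench pool.
  ceph osd pool get vdbench compression_algorithm
  ceph osd pool get vdbench compression_mode
  ceph osd pool get vdbench compression_required_ratio

  # To disable compression again, set the mode back to none.
  ceph osd pool set vdbench compression_mode none
  ```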
  4. After the preceding commands are executed, check whether the storage pool was created successfully.
    ceph -s
    If the following information is displayed, the storage pool is successfully created.
    cluster:
      id:     0207ddea-2150-4509-860d-365e87420b3e
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
      mgr: ceph3(active, since 2d), standbys: ceph2, ceph1
      osd: 1 osds: 1 up (since 25h), 1 in (since 9d)

    data:
      pools: 1 pools, 32 pgs
      usage: 46 MiB used, 2.2 TiB / 2.2 TiB avail
      pgs:   32 active+clean
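    Besides ceph -s, you can inspect the pool's parameters directly to confirm the PG count, replica size, and other settings configured in the preceding steps:

    ```shell
    # List all pools with their full parameter set (size, pg_num, application, etc.).
    ceph osd pool ls detail

    # Or query individual parameters of the vdbench pool.
    ceph osd pool get vdbench pg_num
    ceph osd pool get vdbench size
    ```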