Creating Storage Pools and a File System
- Run the following commands to create storage pools on ceph1:
cd /root/ceph-mycluster/
ceph osd pool create fs_data 32 32
ceph osd pool create fs_metadata 8 8
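- To verify that both pools exist before continuing, list the pools (a quick check; the exact output depends on your cluster):
ceph osd lspools           # lists pool IDs and names
ceph osd pool ls detail    # also shows pg_num, pgp_num, and replication size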
- The two numbers in the pool creation command (for example, ceph osd pool create fs_data 32 32) set the pg_num and pgp_num parameters of the new storage pool: the number of placement groups (PGs) and the number of PGs for placement (PGP), respectively.
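- If you need to inspect or change these values after creation, the standard pool get/set commands apply. A minimal sketch using the fs_data pool created above (the value 64 is only an illustration):
ceph osd pool get fs_data pg_num       # show the current PG count
ceph osd pool get fs_data pgp_num      # show the current PGP count
ceph osd pool set fs_data pg_num 64    # example: increase the PG count
ceph osd pool set fs_data pgp_num 64   # keep pgp_num in step with pg_num (recent Ceph releases adjust it automatically)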
- According to the official Ceph documentation, the recommended total number of PGs across all storage pools in a cluster is calculated as (Number of OSDs x 100)/Data redundancy factor. For the replication mode, the data redundancy factor is the number of replicas; for the erasure code (EC) mode, it is the sum of the numbers of data blocks and parity blocks. For example, the factor is 3 for the three-replica mode and 6 for the EC 4+2 mode.
- Assume that the cluster has three servers with 12 OSDs each, for a total of 36 OSDs. In the three-replica mode, the recommended PG total is 36 x 100/3 = 1200. It is recommended that the PG quantity be an integral power of 2. Because fs_data holds far more data than the other storage pools, it should be allocated most of the PGs: for example, set the PG quantity of fs_data to 1024 and that of fs_metadata to 128 or 256 (example commands follow below).
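- As an illustration of that sizing, the creation commands for the hypothetical 36-OSD cluster described above would look as follows (pool names as in this section; the counts come from the calculation above):
# (36 OSDs x 100) / 3 replicas = 1200, rounded to powers of 2
ceph osd pool create fs_data 1024 1024
ceph osd pool create fs_metadata 128 128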
- For more information about pools, see the pool documentation in the Ceph open source community.
- Create a file system based on the storage pools. cephfs is the file system name, and fs_metadata and fs_data are storage pool names. Note the argument order: the metadata pool (fs_metadata) comes first, followed by the data pool (fs_data).
ceph fs new cephfs fs_metadata fs_data
- (Optional) Enable zlib compression for the fs_data storage pool. The force mode compresses all writes regardless of client hints, and compression_required_ratio .99 stores the compressed version of a chunk only if it is no more than 99% of the original size.
ceph osd pool set fs_data compression_algorithm zlib
ceph osd pool set fs_data compression_mode force
ceph osd pool set fs_data compression_required_ratio .99
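- To confirm that the settings took effect, the pool properties can be read back; on recent Ceph releases, ceph df detail also reports space saved by compression:
ceph osd pool get fs_data compression_algorithm
ceph osd pool get fs_data compression_mode
ceph osd pool get fs_data compression_required_ratio
ceph df detail    # the USED COMPR / UNDER COMPR columns show compression savings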
- View the created CephFS.
ceph fs ls
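- Beyond listing the file system, its state can be checked in more detail (a minimal sketch; cephfs is the file system created above):
ceph fs status cephfs    # per-pool usage and MDS state for the file system
ceph -s                  # overall cluster health, including PG states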