
Partitioning Drives

Ceph 14.2.10 uses BlueStore as its back-end storage engine. The Journal partition used in the Jewel release is no longer needed; it is replaced by the DB partition (metadata partition) and the WAL partition. These two partitions store the back-end metadata and the log files generated by BlueStore: the metadata improves the efficiency of the entire storage system, and the logs maintain system stability.
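As an illustration of how these partitions are consumed later, a BlueStore OSD can be created with ceph-volume by pointing --block.db and --block.wal at the NVMe partitions. This is a hedged sketch; the device names are illustrative and depend on the actual partition layout created in the procedure below:

```shell
# Illustrative only: device names depend on the actual layout.
# /dev/bcache0   - bcache device backed by a 4 TB data drive
# /dev/nvme0n1p1 - a 30 GB DB partition
# /dev/nvme0n1p7 - a 15 GB WAL partition
ceph-volume lvm create --bluestore \
    --data /dev/bcache0 \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p7
```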

In cluster deployment mode, each Ceph node is configured with twelve 4 TB data drives and two 3.2 TB NVMe drives. Each 4 TB data drive functions as the backing data drive of a bcache device. Each NVMe drive hosts the DB and WAL partitions of six OSDs and serves as the cache drive of the bcache devices. Generally, a WAL partition larger than 10 GB is sufficient. The official Ceph documentation recommends that each DB partition be at least 4% of the capacity of its data drive, and that the cache drive capacity account for 5% to 10% of the total data drive capacity. The size of each DB partition can be flexibly configured based on the available NVMe drive capacity.
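The sizing rules above can be checked with simple shell arithmetic. A minimal sketch, assuming the 4 TB (4000 GB) data drives used in this configuration:

```shell
#!/bin/bash
# Sizing guidance from the Ceph documentation, applied to a 4 TB data drive.
DATA_DRIVE_GB=4000

# DB partition: recommended to be at least 4% of the data drive capacity.
DB_MIN_GB=$((DATA_DRIVE_GB * 4 / 100))

# bcache cache drive: 5% to 10% of the data drive capacity it serves.
CACHE_MIN_GB=$((DATA_DRIVE_GB * 5 / 100))
CACHE_MAX_GB=$((DATA_DRIVE_GB * 10 / 100))

echo "Recommended minimum DB partition: ${DB_MIN_GB} GB"
echo "Recommended cache size range: ${CACHE_MIN_GB}-${CACHE_MAX_GB} GB"
```

Note that the 30 GB DB partitions chosen below are smaller than the 4% guideline; as stated above, the DB size can be adapted to the available NVMe capacity.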

In this example, the WAL partition capacity is 15 GB, the DB partition capacity is 30 GB, and the cache drive capacity is 400 GB (10% of the data drive capacity).
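As a quick consistency check, the chosen layout (six DB, six WAL, and six cache partitions per NVMe drive) must fit on one 3.2 TB drive. A minimal shell sketch:

```shell
#!/bin/bash
# Per-NVMe-drive layout chosen above: 6 OSDs per NVMe drive.
OSDS_PER_NVME=6
DB_GB=30
WAL_GB=15
CACHE_GB=400

# Total space consumed on each NVMe drive by all partitions.
TOTAL_GB=$((OSDS_PER_NVME * (DB_GB + WAL_GB + CACHE_GB)))
echo "Space used per NVMe drive: ${TOTAL_GB} GiB"   # 2670 GiB

# A 3.2 TB NVMe drive provides roughly 2980 GiB, so the layout fits.
```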

Perform the following operations on all three Ceph nodes. The following uses two NVMe drives (/dev/nvme0n1 and /dev/nvme1n1) as an example. If the system has more NVMe SSDs, extend the range of the j loop variable accordingly (for example, {0..3} for four drives). To change a partition size, change the increment in the corresponding end=`expr $start + N` line to the required capacity in GiB.

  1. Create the partition.sh script.
    vi partition.sh
  2. Add the following content to the file:
    #!/bin/bash
    for j in {0..1}
    do
        parted -s /dev/nvme${j}n1 mklabel gpt
        start=0
    # Create six 30 GB DB partitions; the first starts at sector 2048 for alignment.
        end=`expr $start + 30`
        parted -s /dev/nvme${j}n1 mkpart primary 2048s ${end}GiB
        start=$end
        for i in {1..5}
        do
            end=`expr $start + 30`
            parted -s /dev/nvme${j}n1 mkpart primary ${start}GiB ${end}GiB
            start=$end
        done
    # Create six 15 GB WAL partitions.
        for i in {1..6}
        do
            end=`expr $start + 15`
            parted -s /dev/nvme${j}n1 mkpart primary ${start}GiB ${end}GiB
            start=$end
        done
    # Create six 400 GB cache partitions for bcache.
        for i in {1..6}
        do
            end=`expr $start + 400`
            parted -s /dev/nvme${j}n1 mkpart primary ${start}GiB ${end}GiB
            start=$end
        done
    done
    

    This script applies only to the current hardware configuration. For other hardware configurations, you need to modify the script.

  3. Run the script.
    bash partition.sh
    
  4. Check whether the partitions are successfully created.
    lsblk
    

    If information similar to the following is displayed, the partitions are successfully created: