
Expanding Block Storage Capacity

Adding MONs to New Servers

In the /etc/ceph/ceph.conf file on ceph1, add ceph4 and ceph5 to mon_initial_members and add their IP addresses to mon_host.
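After the edit, the monitor-related lines should look like the following sketch (placing them under the [global] section is typical for ceph-deploy clusters; match the layout of your own file):

    [global]
    mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5
    mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198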

  1. Modify the ceph.conf file.
    cd /etc/ceph/
    vim ceph.conf
    1. Change mon_initial_members=ceph1,ceph2,ceph3 to mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5.
    2. Change mon_host=192.168.3.156,192.168.3.157,192.168.3.158 to mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198.

  2. Push the ceph.conf file from ceph1 to the other nodes.
    ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
    
  3. Create MONs on ceph4 and ceph5 (run the command on ceph1).
    ceph-deploy mon create ceph4 ceph5
    
  4. Check the MON status.
    ceph mon stat
    

    If the new servers appear in the command output, the MONs have been created on them.
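    Optionally (not part of the original procedure), you can also confirm that the new MONs have joined the quorum:

    ceph quorum_status --format json-pretty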

(Optional) Deleting MONs

Deleting a MON has a significant impact on the cluster. Plan MONs in advance and avoid deleting them.

The following uses deleting the MONs on ceph2 and ceph3 as an example. Remove the ceph2 and ceph3 entries from the /etc/ceph/ceph.conf file on ceph1, then push the ceph.conf file to the other nodes.

  1. Modify ceph.conf.
    cd /etc/ceph/
    vim ceph.conf

    Change mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5 to mon_initial_members=ceph1,ceph4,ceph5.

    Change mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198 to mon_host=192.168.3.156,192.168.3.197,192.168.3.198.

  2. Push the ceph.conf file from ceph1 to the other nodes.
    ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
    
  3. Delete the MONs from ceph2 and ceph3.
    ceph-deploy mon destroy ceph2 ceph3
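    As an optional check (not part of the original procedure), the same ceph mon stat command used earlier should now list only ceph1, ceph4, and ceph5:

    ceph mon stat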
    

Deploying MGRs

Create MGRs for ceph4 and ceph5.

ceph-deploy mgr create ceph4 ceph5
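As an optional check (not part of the original procedure), verify that the new manager daemons are running; ceph -s lists the active and standby MGRs, and ceph mgr stat prints the same information in brief:

    ceph -s
    ceph mgr stat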

Deploying OSDs

Create OSDs for the new servers (each server has 12 drives).

for i in {a..l}
do
ceph-deploy osd create ceph4 --data /dev/sd${i}
done
for i in {a..l}
do
ceph-deploy osd create ceph5 --data /dev/sd${i}
done
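The loops above assume that /dev/sda through /dev/sdl are all data drives on the new servers; if one of these device names belongs to the system drive, adjust the range accordingly. As an optional sanity check (not part of the original procedure), confirm that the new OSDs appear under ceph4 and ceph5:

    ceph osd tree
    ceph osd stat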

Configuring Storage Pools

  1. Query the storage pool information.
    ceph osd lspools
    

  2. Modify pg_num and pgp_num.

    The calculation rule for PGs is as follows:

    Total PGs = (Total_number_of_OSD * 100 / max_replication_count) / pool_count

    Modify pg_num and pgp_num as follows (a worked example of the formula is given after the commands):
    ceph osd pool set poolname pg_num 2048
    ceph osd pool set poolname pgp_num 2048
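    As a worked example of the formula, assuming the expanded cluster has five servers with 12 OSDs each (60 OSDs in total), three-way replication, and a single pool:

    Total PGs = (60 * 100 / 3) / 1 = 2000

    Rounding the result up to the nearest power of two, the common practice for pg_num, gives the value 2048 used above.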
    

Adding RBDs

  1. Query the storage pool information.
    ceph osd lspools
    

  2. Create images on ceph1.

    Create five RBDs in the vdbench storage pool (each RBD is 200 GB). The script is as follows:

    #!/bin/bash
    # Create five 200 GB RBD images (image0..image4) in the vdbench pool.
    pool="vdbench"
    size="204800"    # in MB: 204800 MB = 200 GB
    createimages()
    {
        for image in {0..4}
        do
            rbd create image${image} --size ${size} --pool ${pool} --image-format 2 --image-feature layering
            sleep 1
        done
    }
    createimages
    
  3. Check the storage pool images.
    rbd ls vdbench
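    To inspect one of the new images in more detail (an optional check, not part of the original procedure), rbd info shows its size, object layout, and enabled features:

    rbd info vdbench/image0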
    

Verifying Capacity Expansion

After capacity expansion, Ceph migrates some PGs from the existing OSDs to the new OSDs to rebalance the data.
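You can monitor the rebalancing while it runs, for example with ceph -w (which streams cluster status and events) or by re-running ceph -s periodically; migration is complete when all PGs return to the active+clean state.

    ceph -w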

  1. Check whether the cluster is healthy after data migration is complete.
    ceph -s
    

  2. Check that the storage capacity of the cluster has increased.
    ceph osd df
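    In addition to ceph osd df, the ceph df command (optional, not part of the original procedure) summarizes the raw capacity and per-pool usage of the cluster:

    ceph df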