Expanding File Storage Capacity
Adding MONs to New Servers
In the /etc/ceph/ceph.conf file on ceph1, add ceph4 and ceph5 to mon_initial_members and their IP addresses to mon_host. An example of the resulting entries is shown at the end of this procedure.
- Modify the ceph.conf file.
cd /etc/ceph/
vim ceph.conf
- Change mon_initial_members=ceph1,ceph2,ceph3 to mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5.
- Change mon_host=192.168.3.156,192.168.3.157,192.168.3.158 to mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198.
- Push the ceph.conf file from the ceph1 node to other nodes.
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
- On ceph1, create MONs on ceph4 and ceph5.
ceph-deploy mon create ceph4 ceph5
- Check the MON status.
ceph mon stat
If information about the new servers is displayed in the command output, the MONs are created on the new servers.
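For reference, after the edit in the first step the relevant entries in /etc/ceph/ceph.conf should look similar to the following (the rest of the [global] section depends on your deployment):
mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5
mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198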
(Optional) Deleting MONs

Deleting a MON has a significant impact on the cluster. Plan the MON layout in advance and avoid deleting MONs; Ceph needs a majority of MONs running to maintain quorum, so it is generally recommended to keep at least three MONs and, where possible, an odd number.
The following uses ceph2 and ceph3 as an example. Remove the ceph2 and ceph3 entries from the /etc/ceph/ceph.conf file on ceph1 and push the updated ceph.conf file to the other nodes.
- Modify ceph.conf.
cd /etc/ceph/
vim ceph.conf
- Change mon_initial_members=ceph1,ceph2,ceph3,ceph4,ceph5 to mon_initial_members=ceph1,ceph4,ceph5.
- Change mon_host=192.168.3.156,192.168.3.157,192.168.3.158,192.168.3.197,192.168.3.198 to mon_host=192.168.3.156,192.168.3.197,192.168.3.198.
- Push the ceph.conf file from the ceph1 node to other nodes.
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 ceph4 ceph5
- Delete the MONs from ceph2 and ceph3.
ceph-deploy mon destroy ceph2 ceph3
- Run the following command on client1 to obtain the key that the client uses to access the Ceph cluster:
cat /etc/ceph/ceph.client.admin.keyring
You only need to run the command once. The keys for cluster nodes and client nodes are the same.
- Run the following commands on client1 to remount the root directory of the Ceph file system (type ceph, accessed through the MON on ceph1) to /mnt/cephfs on client1. A variant that reads the key from a file is sketched after this procedure.
umount -t ceph /mnt/cephfs
mount -t ceph 192.168.3.156:6789:/ /mnt/cephfs -o name=admin,secret=<key obtained in the previous step>,sync
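Passing the key on the command line exposes it in the shell history and process list, so mount.ceph also accepts a secretfile option. A minimal sketch, assuming the key from the previous step has been saved to /etc/ceph/admin.secret (a hypothetical path):
# assumes the admin key was stored in /etc/ceph/admin.secret beforehand
mount -t ceph 192.168.3.156:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,sync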
Deploying MGRs
Create MGRs for ceph4 and ceph5.
ceph-deploy mgr create ceph4 ceph5
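As an optional check (not part of the original procedure), you can confirm that the new manager daemons have registered; the mgr line of the cluster status should now list standbys on ceph4 and ceph5, assuming an existing manager remains active.
ceph -s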
Adding MDSs
Create MDSs on ceph4 and ceph5.
ceph-deploy mds create ceph4 ceph5
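Optionally, verify that the new MDS daemons appear; with the default max_mds of 1 they are expected to show up as standby daemons.
ceph mds stat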
Deploying OSDs
Create OSDs for the new servers (each server has 12 drives).
for i in {a..l}
do
    ceph-deploy osd create ceph4 --data /dev/sd${i}
done
for i in {a..l}
do
    ceph-deploy osd create ceph5 --data /dev/sd${i}
done
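To confirm that the 24 new OSDs have joined the cluster, you can list the CRUSH tree; with the default layout they should appear under the ceph4 and ceph5 host buckets.
ceph osd tree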
Configuring Storage Pools
- Query the storage pool information.
ceph fs ls
- Modify pg_num and pgp_num.
The calculation rule for PGs is as follows:
Total PGs = (Total_number_of_OSDs * 100 / max_replication_count) / pool_count
Modify pg_num and pgp_num as follows (a worked example of the calculation is given after this procedure):
ceph osd pool set fs_metadata pg_num 256
ceph osd pool set fs_metadata pgp_num 256
ceph osd pool set fs_data pg_num 2048
ceph osd pool set fs_data pgp_num 2048
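As a rough illustration of the formula (assuming the expanded cluster has about 60 OSDs in total, a replication count of 3, and these two pools): (60 * 100 / 3) / 2 = 1000, i.e. roughly 1000 PGs per pool. pg_num is then rounded to a power of two and usually weighted toward the data pool, which is how values such as 2048 for fs_data and 256 for fs_metadata are arrived at. The current values can be checked with:
ceph osd pool get fs_data pg_num
ceph osd pool get fs_metadata pg_num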
Verifying Capacity Expansion
After capacity expansion, Ceph migrates some PGs from the existing OSDs to the new OSDs to rebalance the data.
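If you want to follow the migration while it runs, you can watch cluster events and recovery progress, for example:
ceph -w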
- Check whether the cluster is healthy after data migration is complete.
ceph -s
- Check that the storage capacity of the cluster has increased.
ceph osd df
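For a cluster-wide and per-pool view of the added capacity, you can also run:
ceph df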