Deploying MONs
The Monitor (MON) tracks the status of the Ceph cluster. MON deployment operations need to be performed only on the primary node (ceph1 is used as an example).
- Create a Ceph cluster. The following uses ceph1, ceph2, and ceph3 as an example.
cd /etc/ceph
ceph-deploy new ceph1 ceph2 ceph3

- Configure global parameters and MON parameters for the Ceph cluster.
Operations such as configuring nodes and using ceph-deploy to configure OSDs must be performed in the /etc/ceph directory; otherwise, errors may occur.
- Open the ceph.conf file that is automatically generated in the /etc/ceph directory.
vi /etc/ceph/ceph.conf
- Press i to enter the insert mode and change the content of ceph.conf to the following (use the latest fsid):
[global]
fsid = f5a4f55c-d25b-4339-a1ab-0fceb4a2996f
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.3.166,192.168.3.167,192.168.3.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.65.0/24
cluster_network = 192.168.66.0/24

[mon]
mon_allow_pool_delete = true
For a single-node environment, add the following content below [global]:
osd_pool_default_size = 1
osd_pool_default_min_size = 1
- Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
In Ceph 14.2.8, when the BlueStore engine is used, the BlueFS buffer is enabled by default. As a result, the system memory may be fully occupied by the buffer or cache, causing performance deterioration. You can use either of the following methods to solve the problem:
- If the cluster load is not heavy, set bluefs_buffered_io to false.
- Periodically run the following command to forcibly reclaim the memory occupied by the buffer or cache:
echo 3 > /proc/sys/vm/drop_caches
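The two workarounds above can be sketched as configuration fragments. The cron file name and hourly schedule below are illustrative assumptions, not taken from this document:

```shell
# Workaround 1: disable buffered BlueFS I/O in /etc/ceph/ceph.conf
# (the option applies to OSD daemons; restart the OSDs after editing):
#
#   [osd]
#   bluefs_buffered_io = false

# Workaround 2: reclaim the buffer/cache periodically via a root cron
# entry, e.g. in /etc/cron.d/ceph-drop-caches (hypothetical file name
# and schedule):
#
#   0 * * * * root sync && echo 3 > /proc/sys/vm/drop_caches
```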
- Initialize the monitor and collect keys.
ceph-deploy mon create-initial

- Copy the ceph.client.admin.keyring file generated in the previous step to each node.
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 client1 client2 client3

- Check the Ceph cluster status to determine whether the MONs are successfully configured.
ceph -s
Expected result of successful configuration:
  cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
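The status check above can also be scripted by parsing the health line of `ceph -s`. A minimal sketch, assuming a POSIX shell on a node with admin access; `check_health` is a helper name introduced here, not a Ceph command:

```shell
#!/bin/sh
# check_health reads "ceph -s" output on stdin and succeeds only when
# the cluster reports HEALTH_OK.
check_health() {
    awk '/health:/ { ok = ($2 == "HEALTH_OK") } END { exit ok ? 0 : 1 }'
}

# Typical use on a MON node:
#   ceph -s | check_health && echo "MONs configured successfully"
```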
If the MONs fail to be generated, the permission configuration may be incorrect. In this case, you need to configure the Ceph group and user:
/usr/sbin/groupadd ceph -g 167 -o -r 2>/dev/null || :
/usr/sbin/useradd ceph -u 167 -o -r -g ceph -s /sbin/nologin -c "Ceph daemons" -d /var/lib/ceph 2>/dev/null || :