Deploying MON Nodes
Perform the following operations on ceph1 to deploy Monitor (MON) nodes.
- Create a cluster.
cd /etc/ceph
ceph-deploy new ceph1 ceph2 ceph3
The command output is as follows:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /home/xiaoshuang/0808/kunpeng_perfstudiokit/venv/bin/ceph-deploy new ceph1 ceph2 ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph1', 'ceph2', 'ceph3']
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf object at 0xfffd10452890>
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xfffd1044e830>
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph1][DEBUG ] connected to host: ceph1
- Modify the related configurations. Isolate the internal network between cluster nodes from the public network: use 192.168.4.0/24 for data synchronization within the storage cluster (server nodes) and 192.168.3.0/24 for data exchange between server nodes and client nodes.
- Open the ceph.conf file that is automatically generated in the /etc/ceph directory.
Operations such as configuring nodes and using ceph-deploy to configure OSDs must be performed in the /etc/ceph directory; otherwise, an error may occur.
vi /etc/ceph/ceph.conf
- Press i to enter the insert mode and modify the file as follows:
[global]
fsid = f5a4f55c-d25b-4339-a1ab-0fceb4a2996f
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.3.166,192.168.3.167,192.168.3.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.3.0/24
cluster_network = 192.168.4.0/24

[mon]
mon_allow_pool_delete = true
In Ceph 14.2.8, when the BlueStore engine is used, BlueFS buffered I/O is enabled by default. As a result, system memory may be fully occupied by the buffer or cache, degrading performance. You can use either of the following methods to solve the problem:
- If the cluster load is not heavy, set bluefs_buffered_io to false.
- Periodically run the echo 3 > /proc/sys/vm/drop_caches command to forcibly reclaim the memory held by the buffer or cache.
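The two workarounds above can be sketched as follows. This is a minimal, safe-to-run illustration: it writes to a scratch copy of the configuration file (on a real node you would set CONF to /etc/ceph/ceph.conf and restart the OSDs afterwards), and the cron line is only printed, not installed. The [osd] section placement and the hourly schedule are assumptions to adapt to your environment.

```shell
# Workaround sketch -- operates on a scratch file so it can be run anywhere.
# On a real node: CONF=/etc/ceph/ceph.conf, then restart the OSD daemons.
CONF=$(mktemp)

# Option 1: disable BlueFS buffered I/O (suitable for lightly loaded clusters).
printf '\n[osd]\nbluefs_buffered_io = false\n' >> "$CONF"

# Option 2: reclaim buffer/cache memory periodically, e.g. hourly via a
# /etc/cron.d entry (the schedule here is only an example).
CRON_LINE='0 * * * * root sync; echo 3 > /proc/sys/vm/drop_caches'
echo "$CRON_LINE"

grep 'bluefs_buffered_io' "$CONF"   # prints: bluefs_buffered_io = false
```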
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
- Initialize the monitors and collect keys.
ceph-deploy mon create-initial
After the command completes, ceph-deploy automatically generates ceph.client.admin.keyring. The command output is as follows:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xfffcbf407f00>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xfffcbf3e75d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1 ceph2 ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: openEuler 20.03 LTS-SP1
[ceph1][DEBUG ] determining if provided host has same hostname in remote
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] deploying mon to ceph1
[ceph1][DEBUG ] get remote short hostname
[ceph1][DEBUG ] remote hostname: ceph1
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph1][DEBUG ] create the mon path if it does not exist
- Copy ceph.client.admin.keyring to each node.
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 client1 client2 client3
The command is successfully executed if the following information is displayed:
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 client1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xfffd01a83f50>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph1', 'ceph2', 'ceph3', 'client1']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xfffd01c495d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph1
[ceph1][DEBUG ] connected to host: ceph1
[ceph1][DEBUG ] detect platform information from remote host
[ceph1][DEBUG ] detect machine type
[ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph2
[ceph2][DEBUG ] connected to host: ceph2
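After the keyring is pushed, standard ceph-deploy practice is to make it readable on each node so that the ceph CLI also works for non-root users (the chmod +r step from the upstream ceph-deploy quick start). A sketch, demonstrated on a scratch file so it is safe to run anywhere; on a real node the target is /etc/ceph/ceph.client.admin.keyring:

```shell
# Scratch stand-in for /etc/ceph/ceph.client.admin.keyring on each node.
KEYRING=$(mktemp)
chmod 600 "$KEYRING"   # ceph-deploy pushes the keyring readable by root only
chmod +r "$KEYRING"    # allow the ceph CLI to read it as a non-root user
ls -l "$KEYRING"       # shows the added read permission
```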
- Check whether the configuration is successful.
ceph -s
The configuration is successful if the following information is displayed:
  cluster:
    id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
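If you want to script this check, the health field of ceph -s can be tested directly. A minimal sketch: STATUS below reuses the sample output above so the snippet runs anywhere; on a live node you would capture it with STATUS=$(ceph -s) instead.

```shell
# Sample status text (on a real node: STATUS=$(ceph -s)).
STATUS='cluster:
  id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
  health: HEALTH_OK'

# Report success only when the cluster reports HEALTH_OK.
if printf '%s\n' "$STATUS" | grep -q 'HEALTH_OK'; then
  echo "cluster healthy"
else
  echo "cluster not healthy" >&2
fi
```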
Parent topic: Deploying a Ceph Cluster