
Deploying MON Nodes

Perform operations in this section only on ceph1.

  1. Create a cluster.
    cd /etc/ceph
    ceph-deploy new ceph1 ceph2 ceph3 
    

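    The ceph-deploy new command writes the cluster definition into the current directory. As a quick sanity check (a sketch; file names other than ceph.conf and ceph.mon.keyring may vary with the ceph-deploy version), list the generated files:

    # Confirm that the cluster definition files were generated.
    ls -l /etc/ceph
    # Typically includes:
    #   ceph.conf            - cluster configuration containing the new fsid
    #   ceph.mon.keyring     - initial monitor keyring
    #   ceph-deploy-ceph.log - ceph-deploy log
    grep fsid /etc/ceph/ceph.conf
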
  2. Open the ceph.conf file that is automatically generated in /etc/ceph.
    vi /etc/ceph/ceph.conf 
    

    Modify the ceph.conf file as follows:

    [global]
    fsid = f5a4f55c-d25b-4339-a1ab-0fceb4a2996f
    mon_initial_members = ceph1, ceph2, ceph3
    mon_host = 192.168.3.166,192.168.3.167,192.168.3.168
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    
    public_network = 192.168.3.0/24
    cluster_network = 192.168.4.0/24
    
    bluestore_prefer_deferred_size_hdd = 0
    rbd_op_threads = 16 # Specifies the number of RBD tp threads.
    osd_memory_target = 2147483648 # Limits the memory used by each OSD.
    bluestore_default_buffered_read = false # Determines whether read data is cached, based on this flag.
    [mon]
    mon_allow_pool_delete = true
    
    For a single-node environment, add the following content below [global] to reduce the default pool replica count, and the minimum replica count required for I/O, to 1:
    osd_pool_default_size = 1
    osd_pool_default_min_size = 1
    
    Table 1 Parameter description

    Parameter                        Description                                             Tuning Suggestion
    -------------------------------  ------------------------------------------------------  ------------------
    rbd_op_threads                   Maximum number of threads supported by a block device    16
    osd_memory_target                Maximum memory size that can be used by the OSD          2147483648 (2 GiB)
    bluestore_default_buffered_read  BlueStore read buffer switch                             false

    • Run the preceding node configuration commands, and the later ceph-deploy Object Storage Daemon (OSD) configuration commands, from the /etc/ceph directory. Otherwise, an error may occur.
    • This modification isolates the internal cluster network from the external access network: the 192.168.4.0 segment is used for data synchronization between internal storage nodes, and the 192.168.3.0 segment is used for data exchange between storage nodes and compute nodes.
    • You are advised to configure both the public and cluster networks in the 192.168.3.0 network segment for better Global Cache performance.
    • In Ceph 14.2.8, when the BlueStore engine is used, BlueFS buffered I/O is enabled by default. As a result, the buffer or cache may consume all of the system memory, causing performance deterioration. You can use either of the following methods to solve the problem (a sketch of both follows this list):
      • If the cluster load is not heavy, set bluefs_buffered_io to false.
      • Periodically run the echo 3 > /proc/sys/vm/drop_caches command to forcibly reclaim the buffer or cache memory.
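    A minimal sketch of both methods follows. The [global] placement and the 30-minute cron interval are assumed example values, not settings mandated by this guide:

    # Method 1: add to ceph.conf under [global], then restart the OSDs.
    bluefs_buffered_io = false

    # Method 2: root crontab entry that forcibly reclaims buffer/cache
    # memory every 30 minutes (assumed interval).
    */30 * * * * sync && echo 3 > /proc/sys/vm/drop_caches
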
  3. Initialize the monitor and collect the keys.
    ceph-deploy mon create-initial 
    

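    If initialization succeeds, ceph-deploy gathers the cluster keyrings into the working directory. As a quick check (the exact set of bootstrap keyrings depends on the Ceph release):

    # Confirm that the monitor formed quorum and the keys were collected.
    ls /etc/ceph/*.keyring
    # Typically includes ceph.client.admin.keyring and the
    # ceph.bootstrap-{osd,mds,rgw}.keyring bootstrap keys.
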
  4. Copy the ceph.client.admin.keyring file generated in step 3 to each node.
    ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3 client1 client2 client3 
    

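    To confirm the copy, the following check can be run on each node:

    # Both files must exist on every node that runs ceph commands.
    ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
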
  5. Check whether the configuration is successful.
    ceph -s
    

    If the deployment is successful, output similar to the following is displayed:

    cluster:
      id:     f6b3c38c-7241-44b3-b433-52e276dd53c6
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 25h)
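
    Beyond ceph -s, the monitor quorum can be inspected directly. Both commands below are standard Ceph CLI calls:

    # One-line summary of the monitor map and quorum.
    ceph mon stat
    # Detailed quorum view, including the elected leader.
    ceph quorum_status --format json-pretty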