
Booting a Cluster on ceph1

Create a cluster configuration file on ceph1 to boot a Ceph container cluster and manage all nodes.

  1. Create a default configuration file, ceph.conf.
    1. Open the ceph.conf file.
      cd /home
      vim ceph.conf
      
    2. Press i to enter the insert mode and add the following content to the file:
      [global]
      mon_allow_pool_delete = true
      osd_pool_default_size = 3
      osd_pool_default_min_size = 2
      
      osd_pg_object_context_cache_count = 256
      
      bluestore_kv_sync_thread_polling = true
      bluestore_kv_finalize_thread_polling = true
      
      osd_min_pg_log_entries = 10
      osd_max_pg_log_entries = 10
      osd_pool_default_pg_autoscale_mode = off
      
      bluestore_cache_size_ssd = 18G
      
      osd_memory_target = 20G # Limits the OSD memory.
      
      bluestore_block_db_path = ""
      bluestore_block_db_size = 0
      bluestore_block_wal_path = ""
      bluestore_block_wal_size = 0
      
      bluestore_rocksdb_options = use_direct_reads=true,use_direct_io_for_flush_and_compaction=true,compression=kNoCompression,max_write_buffer_number=128,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4M,target_file_size_base=4M,max_background_compactions=2,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=2,max_bytes_for_level_multiplier=8,flusher_threads=2
      

      For details about Ceph tuning configurations, see "Ceph Tuning" in the Ceph Object Storage Tuning Guide.

    3. Press Esc to exit the insert mode. Type :wq! and press Enter to save the file and exit.
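    Before bootstrapping, a quick sanity check can confirm that the file exists and contains the expected [global] section. A minimal sketch (the check_conf helper is illustrative, not a cephadm command):

```shell
# check_conf: verify that a Ceph configuration file exists and contains
# a [global] section (illustrative helper, not part of cephadm).
check_conf() {
    [ -f "$1" ] || { echo "missing: $1" >&2; return 1; }
    grep -q '^\[global\]' "$1" && echo "ok: [global] present in $1"
}
```

    For the file created above, run `check_conf /home/ceph.conf`.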
  2. Boot a Ceph cluster.
    cephadm bootstrap -c ceph.conf --mon-ip 192.168.3.166 --cluster-network 192.168.4.0/24 --skip-monitoring-stack
    
    • --mon-ip: IP address of the node on the front-end public network
    • --cluster-network: subnet (CIDR) of the back-end cluster network
    • -c ceph.conf: Optional. Applies the configuration file created in step 1, overriding the default Ceph settings at bootstrap.
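    When bootstrapping several environments, the network parameters can be factored into variables. A sketch that composes the same command line as above (the build_bootstrap_cmd helper and variable names are illustrative):

```shell
# build_bootstrap_cmd: compose the cephadm bootstrap command line from the
# monitor IP and cluster network (illustrative helper; adjust to your setup).
build_bootstrap_cmd() {
    local mon_ip=$1 cluster_net=$2
    echo "cephadm bootstrap -c ceph.conf --mon-ip ${mon_ip} --cluster-network ${cluster_net} --skip-monitoring-stack"
}
```

    For example, `build_bootstrap_cmd 192.168.3.166 192.168.4.0/24` prints the exact command used above.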

  3. Copy the public key to other nodes.
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph2
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph3
    
  4. Synchronize the local repository configuration to other nodes.
    scp /etc/containers/registries.conf ceph2:/etc/containers/
    scp /etc/containers/registries.conf ceph3:/etc/containers/
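    Steps 3 and 4 run the same pair of commands once per node; with more nodes, a small loop keeps them consistent. A dry-run sketch (the sync_node helper is illustrative; it prints the commands so they can be reviewed before piping the output to sh):

```shell
# sync_node: print the key-distribution and registry-sync commands for one
# node (illustrative dry-run helper; pipe the output to sh to execute).
sync_node() {
    local host=$1
    echo "ssh-copy-id -f -i /etc/ceph/ceph.pub root@${host}"
    echo "scp /etc/containers/registries.conf ${host}:/etc/containers/"
}

for host in ceph2 ceph3; do
    sync_node "$host"
done
```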
    
  5. Open a shell in the Ceph management container.
    cephadm shell
    
  6. Add the other two host nodes to the cluster.
    ceph orch host add ceph2 --labels _admin
    ceph orch host add ceph3 --labels _admin
    

    Wait 3 to 5 minutes after running the commands so that cephadm can deploy daemons on the new hosts.
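    Rather than waiting a fixed interval, the orchestrator can be polled until both hosts appear. A sketch (hosts_ready is an illustrative helper that scans the `ceph orch host ls` output, read from stdin, for each host name):

```shell
# hosts_ready: read a host listing on stdin and succeed once every host
# name given as an argument appears in it (illustrative helper).
hosts_ready() {
    local listing host
    listing=$(cat)
    for host in "$@"; do
        printf '%s\n' "$listing" | grep -qw "$host" || return 1
    done
    echo "all hosts registered"
}
```

    Used in a retry loop: `until ceph orch host ls | hosts_ready ceph2 ceph3; do sleep 10; done`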

  7. Check whether the hosts are added.
    ceph orch host ls
    

  8. Check the cluster status and ensure that the other two nodes are added to the cluster.
    ceph -s
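    The health line of `ceph -s` should eventually read HEALTH_OK. A sketch of an automated check (cluster_healthy is an illustrative stdin filter; note that HEALTH_WARN can appear transiently while daemons are still deploying):

```shell
# cluster_healthy: read `ceph -s` output on stdin and report whether the
# health line shows HEALTH_OK (illustrative helper).
cluster_healthy() {
    if grep -q 'HEALTH_OK'; then
        echo "cluster healthy"
    else
        echo "cluster not healthy yet" >&2
        return 1
    fi
}
```

    Usage: `ceph -s | cluster_healthy`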