Deploying the Server Software
Deploying ZooKeeper Clusters
Two ZooKeeper clusters need to be deployed: bcm-zk and ccm-zk. bcm-zk stores acceleration node configuration information, and ccm-zk stores Global Cache cluster information.
Notes:
- If the fault domain is set to rack in the gcache.conf file, deploy nodes in the ZooKeeper cluster on different racks to prevent the entire ZooKeeper cluster from being unavailable when a rack is powered off.
- The value of zk_server_list in the gcache.conf and bcm.xml files must be the same as the actual cluster configuration. If not, the following symptoms may occur:
- The format of zk_server_list is "host1:port1,host2:port2,host3:port3". If a host name cannot be resolved to an IP address, the connection fails.
- If all hosts can be identified but the port configuration is incorrect or no znode is deployed on the hosts, the connection can be successful only after several retries.
- The server provides a configuration file to configure service listening ports. The default server listening ports are 7880 and 7881. The port range can be extended to 7880–7889.
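The zk_server_list format described above can be sanity-checked before it is written into gcache.conf. The following is a minimal sketch; the host names and ports are the examples used in this document, not mandated values:

```shell
# Validate that every entry of a zk_server_list value is host:port
# with a numeric port. Prints "OK host port" per entry, or "BAD entry".
check_zk_list() {
    local list="$1" entry host port
    for entry in $(echo "$list" | tr ',' ' '); do
        host="${entry%%:*}"
        port="${entry##*:}"
        case "$port" in
            ''|*[!0-9]*) echo "BAD $entry"; return 1 ;;
        esac
        echo "OK $host $port"
    done
}

check_zk_list "ceph1:2181,ceph2:2181,ceph3:2181"
```

On a live node you would additionally confirm that each host resolves (for example with `getent hosts`) before starting the services.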
- CCM-ZK Deployment
- Deploy the CCM ZooKeeper.
- Create a zkData directory, go to it, and create a myid file.

cd /opt/apache-zookeeper-3.6.3-bin && mkdir zkData
cd zkData && echo 1 > myid
On the ceph1 node, set myid to 1. On ceph2 and ceph3, set myid to 2 and 3 respectively.
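Because only the myid value differs between nodes, a small helper can derive it from the host name so that the same snippet can be pasted on every node. The mapping below is a hypothetical convenience for the three example hosts, not part of the product:

```shell
# Hypothetical helper: map the example host names to their myid values.
myid_for_host() {
    case "$1" in
        ceph1) echo 1 ;;
        ceph2) echo 2 ;;
        ceph3) echo 3 ;;
        *) echo "unknown host $1" >&2; return 1 ;;
    esac
}

# On a real node you would run something like:
#   myid_for_host "$(hostname)" > /opt/apache-zookeeper-3.6.3-bin/zkData/myid
myid_for_host ceph2
```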
- Go to the conf directory and modify the zoo.cfg file. Set the values of dataDir and server ports to actual ones.
cd /opt/apache-zookeeper-3.6.3-bin/conf
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg

tickTime=1000
initLimit=10
syncLimit=2
dataDir=/opt/apache-zookeeper-3.6.3-bin/zkData
clientPort=2181
server.1=ceph1:2888:3888;2181
server.2=ceph2:2888:3888;2181
server.3=ceph3:2888:3888;2181
autopurge.purgeInterval=3
autopurge.snapRetainCount=3
maxClientCnxns=333
4lw.commands.whitelist=*
Table 1 describes the parameters to be modified.
Table 1 Parameter description

Parameter | Description | Recommended Configuration
dataDir | Directory for storing ZooKeeper data. | Specify the value as required.
server.x | ZooKeeper service ID. | Host name (IP address):2888:3888
autopurge.purgeInterval | Interval for clearing historical snapshots, in hours. Set an interval that clears data during off-peak hours. | 3
autopurge.snapRetainCount | Number of latest historical snapshots that are reserved for data restoration. The minimum value is 3. | 3
maxClientCnxns | Maximum number of connections between a client and the server. The default value is 60. | 1000/number of ZooKeeper nodes: 333 for three nodes, 250 for four nodes, and so on.
4lw.commands.whitelist | ZooKeeper four-letter-word command trustlist, which is disabled by default. | *
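The maxClientCnxns recommendation in Table 1 (1000 divided by the node count, using integer division) can be reproduced with shell arithmetic:

```shell
# Derive the recommended maxClientCnxns value from the ZooKeeper node count.
max_client_cnxns() {
    echo $((1000 / $1))
}

max_client_cnxns 3   # three-node cluster -> 333
max_client_cnxns 4   # four-node cluster  -> 250
```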
- Modify the configuration items to prevent the log file from being too large.
vi log4j.properties

zookeeper.log.maxfilesize=20MB
zookeeper.log.maxbackupindex=100

vi /opt/apache-zookeeper-3.6.3-bin/bin/zkEnv.sh

if [ "x${ZOO_LOG4J_PROP}" = "x" ]
then
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
fi
Table 2 describes the parameters to be modified.
- Go to the /opt/apache-zookeeper-3.6.3-bin/bin directory and open the zkServer.sh file.
cd /opt/apache-zookeeper-3.6.3-bin/bin
vi zkServer.sh

- Add the following content to the file:

export LD_LIBRARY_PATH="/opt/gcache/lib"

- Add the JAVA_HOME path to the /opt/apache-zookeeper-3.6.3-bin/bin/zkEnv.sh file.
vim /opt/apache-zookeeper-3.6.3-bin/bin/zkEnv.sh
Add the following content to the file:
JAVA_HOME="/usr/local/jdk8u282-b08"

- Change the owner group and permission of the directory.
chown globalcacheop:globalcache -R /opt/apache-zookeeper-3.6.3-bin
- BCM-ZK Deployment
- Deploy the BCM ZooKeeper.
- Create a zkData directory and a myid file, and modify the file.
cd /opt/apache-zookeeper-3.6.3-bin-bcm && mkdir zkData
cd zkData && echo 1 > myid
On the ceph1 node, set myid to 1. On ceph2 and ceph3, set myid to 2 and 3 respectively.
- Go to the conf directory and modify the zoo.cfg file. Set the values of dataDir and server ports to actual ones.
cd /opt/apache-zookeeper-3.6.3-bin-bcm/conf
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg

tickTime=1000
initLimit=10
syncLimit=2
dataDir=/opt/apache-zookeeper-3.6.3-bin-bcm/zkData
clientPort=2182
server.1=ceph1:2889:3889;2182
server.2=ceph2:2889:3889;2182
server.3=ceph3:2889:3889;2182
autopurge.purgeInterval=3
autopurge.snapRetainCount=3
maxClientCnxns=333
4lw.commands.whitelist=*
Table 1 describes the parameters to be modified.
- Modify the configuration items to prevent the log file from being too large.
vi log4j.properties

zookeeper.log.maxfilesize=20MB
zookeeper.log.maxbackupindex=100

vi /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkEnv.sh

if [ "x${ZOO_LOG4J_PROP}" = "x" ]
then
    ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
fi
Table 2 describes the parameters to be modified.
- Go to the /opt/apache-zookeeper-3.6.3-bin-bcm/bin directory and open the zkServer.sh file.
cd /opt/apache-zookeeper-3.6.3-bin-bcm/bin
vi zkServer.sh

- Add the following content to the file:

export LD_LIBRARY_PATH="/opt/gcache/lib"

- Add the JAVA_HOME path to the /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkEnv.sh file.
vim /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkEnv.sh
Add the following content to the file:
JAVA_HOME="/usr/local/jdk8u282-b08"

- Change the owner group and permission of the directory.
chown globalcacheop:globalcache -R /opt/apache-zookeeper-3.6.3-bin-bcm
You are advised to use a ZooKeeper cluster with an odd number of nodes. An even number of nodes does not increase fault tolerance and makes quorum loss more likely.
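One rationale for the odd-node advice: the quorum is floor(n/2)+1, so a fourth node raises the quorum without raising the number of tolerable failures. This is standard ZooKeeper quorum arithmetic, not specific to this product:

```shell
# Quorum size and tolerable failures for an n-node ZooKeeper ensemble.
quorum()    { echo $(($1 / 2 + 1)); }
tolerable() { echo $((($1 - 1) / 2)); }

for n in 3 4 5; do
    echo "$n nodes: quorum $(quorum $n), tolerates $(tolerable $n) failure(s)"
done
```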
Configuring ZooKeeper Auto-start Upon System Startup
Configure ZooKeeper to automatically start upon system startup on BCM and CCM ZooKeeper cluster nodes. Configure this based on the ZooKeeper type of the current node. In the following example, both bcm-zk and ccm-zk are deployed on the node.
- Go to the /etc/rc.d/init.d directory, create a ZooKeeper script, and set permissions for the script.
cd /etc/rc.d/init.d/
touch zookeeper
chmod 700 zookeeper
- Edit the ZooKeeper script.
vim zookeeper

#!/bin/bash
#chkconfig:2345 20 90
#description:zookeeper
#processname:zookeeper
export JAVA_HOME=/usr/local/jdk8u282-b08
case $1 in
start)
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin/bin/zkServer.sh start
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkServer.sh start
    ;;
stop)
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin/bin/zkServer.sh stop
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkServer.sh stop
    ;;
status)
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin/bin/zkServer.sh status
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkServer.sh status
    ;;
restart)
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin/bin/zkServer.sh restart
    su globalcacheop /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkServer.sh restart
    ;;
*)
    echo "require start|stop|status|restart"
    ;;
esac

- Add ZooKeeper to the auto-start items.
chkconfig --add zookeeper
- Check whether ZooKeeper was added successfully.
chkconfig --list

Deploying Global Cache
- Configure items in gcache.conf of the server software.
vi /opt/gcache/conf/gcache.conf
- The following are all configuration items in the gcache.conf file. Configuration items marked with the comment tag (#) have default values. If you do not configure the items, the default values take effect. If you uncomment an item and assign a value, the assigned value takes effect.
- Configuration items that are not marked with the comment tag (#) are mandatory and need to be manually configured.
- In the gcache.conf file, configuration items are grouped by labels. A configuration item that belongs to label A does not take effect if it is written under label B.
- The order of configuration items within a label is not restricted: item 1 in label A can appear before or after item 2, as long as both stay under label A. The order of the labels themselves is also not restricted: label A can appear before or after label B.
- The ccm label is dedicated for the CCM service. The gc, sa, and global labels are dedicated for the Global Cache service. Other labels are shared. When the services are started, configuration items in both the dedicated and shared labels are verified.
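Because an item only takes effect under its own label, it can help to verify which label a key actually sits under before starting the services. A minimal awk sketch; the demo file and its contents are illustrative, not the real gcache.conf:

```shell
# Print the [section] that a given key is defined under in an INI-style file.
section_of() {
    # $1 = key name, $2 = config file path
    awk -v key="$1" '
        /^\[/     { section = $0 }
        $1 == key { print section; exit }
    ' FS=' *= *' "$2"
}

# Demo against a throwaway fragment:
cat > /tmp/gcache_demo.conf <<'EOF'
[ccm]
fault_domain = node
[gc]
cluster_ipv4_addr = 192.168.2.108
EOF
section_of fault_domain /tmp/gcache_demo.conf   # prints [ccm]
```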
## The maximum length of directories and file paths is 128 characters. If the length exceeds the limit, the value is invalid.
## ---------------------------- log ----------------
# [log]
# log_flush_interval = 500 ## Frequency of flushing logs to drives, in milliseconds. Value range: [0, 2147483647]
# log_size = 20971520 ## Maximum size of backup logs, in bytes. Value range: [0, 20971520]
# log_num = 100 ## Maximum number of backup log files. Value range: [0, 2147483647]
# log_level = INFO ## Threshold of the level for printing logs. The value can be CRI, ERR, WARN, INFO, or DBG.
# log_file_path = /var/log/gcache/ ## Log file path
# log_backup_file_path = /var/log/gcache/backup/ ## Path for storing backup log files
# log_file_path_tmp = /opt/gcache/tmp/ ##
# log_backup_file_path_tmp = /opt/gcache/tmp/backup ##
# log_disk_threshold = 80 ## range:[0,100]
# log_disk_interval = 1 ## range:[0,2147483647]
# log_mod_level = 386:INFO ## Log level of each module. Format: moduleAId:logLevel,moduleBId:logLevel
# zk_log_level = zk_log_level_info ## ZooKeeper log level. The value can be zk_log_level_error, zk_log_level_warn, zk_log_level_info, or zk_log_level_debug.
# log_retention_period = 31536000 ## Log retention period, in seconds
## --------------------------security------------------------
[security]
tls_status=on ## Whether to enable TLS. The value can be on or off. If this parameter is set to on, all configuration items under the security label must be valid. For example, the paths of KMC and certificate files must be correctly configured. If this parameter is set to off, the configuration items under the security label are not verified during startup.
# tls_version = 1.3 ## TLS version. Currently, only TLS 1.3 is supported.
# max_connect = 4096 ## Maximum number of connections over SSL. Value range: [0, 40960]
# portid_start = 7880 ## Start port number. Value range: [1024, 65535]
# portid_end = 7889 ## End port number. Value range: [1024, 65535]
# cert_check_period_days = 3 ## Certificate expiration detection period. Value range: [0, 2147483647]
# cert_check_warnning_days = 90 ## Number of days to generate an alarm in advance before a certificate expires. Value range: [0, 2147483647]
# tls_cipher_list = TLS_AES_256_GCM_SHA384 ## TLS algorithm. The value can be TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_AES_128_CCM_SHA256, or TLS_AES_128_CCM_8_SHA256.
# cert_path = /opt/gcache/secure/Certs ## Path of certificate files
# ca_file = ca.crt ## Name of the CA certificate file, which is stored in the directory specified by cert_path.
# keypass_file = identity.ks ## Name of the password file, which is stored in the directory specified by cert_path.
# agent_cert_file = agent.crt ## Name of the device certificate file, which is stored in the directory specified by cert_path.
# public_key_file = agent.common ## Name of the public key file, which is stored in the directory specified by cert_path.
# private_key_file = agent.self ## Name of the private key file, which is stored in the directory specified by cert_path.
# revoke_crl_file = revoke.crl ## Name of the CRL file, which is stored in the directory specified by cert_path.
# kmc_path = /opt/gcache/secure/kmc ## Path of KMC files
# kmc_primary_ksf = kmc.primary.ks ## Name of the primary KMC key file, which is stored in the directory specified by kmc_path.
# kmc_standby_ksf = kmc.standby.ks ## Name of the standby KMC key file, which is stored in the directory specified by kmc_path.
## -------------------------- gcrpc -----------------
[gcrpc]
ccm_address = 192.168.1.108:7910 ## Listening IP address of the RPC server of the CCM process. The value must be the same as that of public_ipv4_addr.
gc_address = 192.168.1.108:7915 ## Listening IP address of the RPC server of the Global Cache process. The value must be the same as that of public_ipv4_addr.
# -----------------------------proxy-----------------------------
[proxy]
ceph_conf_path = /opt/gcache/ceph/ceph.conf ## Ceph configuration file path
ceph_keyring_path = /opt/gcache/ceph/ceph.client.admin.keyring ## Ceph keyring file path
# core_number = 26,27,28,29 ## Ceph proxy core binding information. Each number indicates a CPU core ID.
# bind_core = 1 ## Specifies whether to bind cores. The value 1 indicates that cores are bound, whereas 0 indicates that cores are not bound. When the value is 0, the core_number parameter is invalid.
# rados_log_out_file = /var/log/gcache/proxy.log
rados_mon_op_timeout = 5 ## Timeout period for the interaction between the Ceph proxy and the monitor, in seconds. If this parameter is not set or is set to 0, the interaction does not time out. Value range: [0, 2147483647]
# rados_osd_op_timeout = 0 ## OSD timeout period. The value 0 indicates that the OSD does not time out. Value range: [0, 2147483647]
# --------------------------- communicate ---------------------------------
[communicate]
public_ipv4_addr = 192.168.1.108 ## IP address of the front-end network (network between the servers and clients), which is used for communication between the client and the server adapter.
# local_port = 7880,7881 ## Ports used to receive requests from the client. Currently, a maximum of eight ports can be configured, and each port must be an unused port in the range of [1024, 65535].
zk_server_list = ceph1:2181,ceph2:2181,ceph3:2181 ## IP address configured in ZooKeeper, which is mapped from /etc/hosts in the format of ZooKeeper_server_IP_address:port. If there are multiple IP addresses, separate them with commas (,). If tls_status is set to on, set the port to 2281, which is the same as that configured in the ZooKeeper server's zoo.cfg file. This parameter does not have a default value and must be manually configured.
# -----------------------------ccm----------------------------
[ccm]
# replication_num = 3 ## Number of data copies. Set this parameter to 3 for multiple nodes and 1 for a standalone node.
cache_node_num = 3 ## Number of nodes in the current Global Cache cluster. The value must be greater than or equal to that of replication_num and less than or equal to 128 (maximum number of nodes in the cluster).
pt_num = 4096
pg_num = 1024
# temp_fault_time_out = 1800 ## Time for detecting a temporary fault. If the time expires, the fault is determined as a permanent fault. Value range: [1, 2147483647]
# check_node_up_time_out = 900 ## Node startup detection time. If the time expires, it is determined that the node is faulty. Value range: [1, 2147483647]
# heartbeat_timeout = 5 ## Heartbeat timeout period, in seconds. Value range: [3, 20]
# heartbeat_interval = 1 ## Interval for reporting and checking heartbeats, in seconds. Value range: [1, 3]
# rpc_timeout = 5 ## Value range: [5, 15]
ccm_monitor = 1 ## Whether the node is used as the CCM deployment node. 1: yes; 0: no
fault_domain = node ## CCM fault domain. The value can be node or rack.
# write_op_throttle = 200 ## Limits the number of write operations that are not returned. The value 0 indicates that the number is not limited.
# read_op_throttle = 0 ## Limits the number of read operations that are not returned. The value 0 indicates that the number is not limited.
# write_bw_throttle = 600000 ## Limits the bandwidth (in Kbit/s) of write operations that are not returned. The value 0 indicates that the bandwidth is not limited.
# read_bw_throttle = 0 ## Limits the bandwidth (in Kbit/s) of read operations that are not returned. The value 0 indicates that the bandwidth is not limited.
[gc]
cluster_ipv4_addr = 192.168.2.108 ## IP address of the back-end network (network between servers), which is used for communication between plogs (persistence layer).
## ------------------------ cluster Hb -------------
# Cluster heartbeat parameters. Timeout period = retry_times x retry_interval + interval. The default value is 5s.
# [clusterHb]
# interval = 1 ## Duration (in seconds) after which a keepalive detection is sent if no data is sent over the connection. Value range: [1, 3]
# retry_times = 4 ## Maximum number of retries before the connection is closed. Value range: [3, 10]
# retry_interval = 1 ## Interval between two consecutive detections, in seconds. Value range: [1, 3]
# -----------------------------sa-----------------------------
# [sa]
# core_number_64 = 18,19,20,21,22,23,24,25,26,27 ## Server adapter core binding information. Each number indicates a CPU core ID.
# core_number_96 = 72,73,74,75,76,77,78,79,80,81,82,83
# core_number_128 = 72,73,74,75,76,77,78,79,80,81,82,83
# queue_amount = 8 ## Number of message queues. Value range: [4, 5000]
# queue_max_capacity = 512 ## Maximum capacity of a message queue. Value range: [1, 1024]
# msgr_amount = 5 ## Number of msgr-workers of the server adapter. Value range: [1, 16]
# bind_core = 1 ## Whether to enable core binding for the messenger thread. The value can be 0 or 1.
# bind_queue_core = 1 ## Whether to enable pthread core binding. The value can be 0 or 1.
# write_qos = 1 ## Whether to enable QoS of the Wcache. The value 0 indicates that QoS is disabled.
# get_quota_cyc = 1000 ## Interval for reading the Wcache bandwidth quota when Wcache QoS is enabled, in ms.
# enable_messenger_throttle = 1 ## Whether to enable QoS of the Ceph messenger. The value 0 indicates that QoS is disabled.
# sa_op_throttle = 30000 ## Limits the number of operations that are not returned. The value 0 indicates that the number is not limited.
#-------------------------sa ceph-------------------------
[global]
ms_connection_idle_timeout = 259200 ## Network timeout period, that is, the amount of time that an idle connection can be retained, in seconds.
# The following three parameters are used to configure the throttle of Ceph messenger. The parameters are valid only when Ceph messenger QoS is enabled.
osd_client_message_size_cap = 8589934592
osd_client_message_cap = 5000000000
ms_dispatch_throttle_bytes = 1258291200
- Copy the Ceph configuration to the /opt/gcache/ceph directory.
cp /etc/ceph/ceph.conf /opt/gcache/ceph/
cp /etc/ceph/ceph.client.admin.keyring /opt/gcache/ceph/
chown globalcache:globalcache -R /opt/gcache/ceph/
chmod 640 /opt/gcache/ceph/*
- Run the touch /etc/sudoers.d/globalcache-smartctl command to create a file.
In this example, the NVMe namespace is 1. Modify this value for other namespaces accordingly.
touch /etc/sudoers.d/globalcache-smartctl
vi /etc/sudoers.d/globalcache-smartctl

globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9]n1
globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9]n1p[0-9]
globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9]n1p[0-9][0-9]
globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1
globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1p[0-9]
globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1p[0-9][0-9]
globalcache ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1p[0-9][0-9][0-9]
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9]n1
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9]n1p[0-9]
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9]n1p[0-9][0-9]
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1p[0-9]
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1p[0-9][0-9]
globalcacheop ALL=NOPASSWD:/usr/sbin/smartctl -i /dev/nvme[0-9][0-9]n1p[0-9][0-9][0-9]
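To confirm that a given device path is covered by one of the globs above before relying on passwordless smartctl, the same patterns can be tested with a shell case statement. This is a hypothetical local check, not part of the deployment:

```shell
# Check a device path against the smartctl patterns granted in the sudoers file.
device_allowed() {
    case "$1" in
        /dev/nvme[0-9]n1 | \
        /dev/nvme[0-9]n1p[0-9] | \
        /dev/nvme[0-9]n1p[0-9][0-9] | \
        /dev/nvme[0-9][0-9]n1 | \
        /dev/nvme[0-9][0-9]n1p[0-9] | \
        /dev/nvme[0-9][0-9]n1p[0-9][0-9] | \
        /dev/nvme[0-9][0-9]n1p[0-9][0-9][0-9])
            echo allowed ;;
        *)  echo denied ;;
    esac
}

device_allowed /dev/nvme0n1p13   # matches /dev/nvme[0-9]n1p[0-9][0-9]
device_allowed /dev/nvme0n2      # namespace 2 is not covered
```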
- Modify the bdm.conf configuration file in the /opt/gcache/conf directory based on the NVMe drive usage.
Run the following command to check the system drives and determine which drive can be used as the BDM partition:
lsblk

In the lsblk output of this example, nvme0n1p13 and nvme1n1p13 are available, each 2.7 TB. Configure the pool sizes in step 7 based on site requirements.
In the bdm.conf configuration file, path/to/your/disk is the placeholder for the BDM device. Change it based on site requirements. Generally, the WAL and DB partitions of Ceph use part of the NVMe space, so you need to manually create the NVMe partitions for BDM. Create /dev/nvme0n1p13 and /dev/nvme1n1p13 in the last two lines of the partition.sh script (see Deploying OSD Nodes). Modify the bdm.conf configuration file as follows:
#metapool:id:<pool_id>:segmentSize:<size>:name:<name>
#datapool:id:<pool_id>:segmentSize:<size>:name:<name>
metapool:id:0:segmentSize:4096:name:metapool
metapool:id:1:segmentSize:262144:name:headpool
datapool:id:2:segmentSize:4194304:name:datapool
#device:id:<disk_id>:size:<size>:status:<status>:name:<name>
device:id:0:sn:0:size:0:status:0:name:/dev/nvme0n1p13
device:id:1:sn:0:size:0:status:0:name:/dev/nvme1n1p13
In the last two device lines, the device names (/dev/nvme0n1p13 and /dev/nvme1n1p13) identify the NVMe drives.
Ensure that the Cache Cluster Manager (CCM) of the current version has two partitions. Otherwise, the CCM fails to be started.
In the current version, ensure that WCachePool has 180 GB of space and IndexPool has 700 GB of space, and that RCachePool uses the remaining space in step 7.
- Modify the permission on the NVMe partitions. The following uses /dev/nvme0n1p13 and /dev/nvme1n1p13 as an example.
chown globalcache:globalcache /dev/nvme0n1p13
chown globalcache:globalcache /dev/nvme1n1p13
- Grant the execute permission on the startup script.
chmod 700 /etc/rc.d/rc.local

Open rc.local:

vi /etc/rc.d/rc.local

Add the following content:

chown globalcache:globalcache /dev/nvme0n1p13
chown globalcache:globalcache /dev/nvme1n1p13
- Format BDM and create WCachePool, RCachePool, IndexPool, and StreamPool.
Before formatting BDM, ensure that the gcache.conf content is correct. Otherwise, the initialization fails. For details about the cause of a failure, see /var/log/messages.
Recommended configurations:
When two 3.2 TB NVMe drives are used, configuration 1 is recommended.
When two 7.68 TB NVMe drives are used, configuration 2 is recommended.
Configuration 1:

sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_format /opt/gcache/conf/bdm.conf --force
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 4194304 180G WCachePool
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 67108864 3500G RCachePool
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 67108864 700G IndexPool
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 4194304 20G StreamPool
Configuration 2:
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_format /opt/gcache/conf/bdm.conf --force
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 4194304 180G WCachePool
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 67108864 7000G RCachePool
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 67108864 700G IndexPool
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_createCapPool 4194304 20G StreamPool
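In the bdm_createCapPool calls above, the first argument is the segment size in bytes and the second is the pool capacity. Assuming the capacities are binary gigabytes (an assumption, not stated by the tool), the number of segments a pool can hold is capacity divided by segment size:

```shell
# Segments per pool = capacity (GiB) * 2^30 / segment size (bytes).
segments() {
    # $1 = capacity in GiB, $2 = segment size in bytes
    echo $(( $1 * 1024 * 1024 * 1024 / $2 ))
}

segments 180 4194304    # WCachePool, 4 MiB segments  -> 46080
segments 700 67108864   # IndexPool, 64 MiB segments  -> 11200
```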
- Run the bdm_df command to view the information.
sudo -u globalcacheop LD_LIBRARY_PATH=/opt/gcache/lib /opt/gcache/bin/bdm_df

Check whether pool IDs 3, 4, 5, and 6 are successfully created and whether the pool IDs are consistent with the creation sequence in step 3.
- Create a sysctl.conf file and set the number of virtual memory areas (VMAs) that a process can have.
vi /etc/sysctl.conf

- Add the following content:

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
vm.max_map_count = 1000000
- Make the modification take effect.
sysctl -p
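A node that missed sysctl -p will still run with the old value. A hypothetical helper to flag this, fed with the output of `sysctl -n vm.max_map_count`:

```shell
# Flag a node whose running vm.max_map_count is below the configured value.
check_max_map_count() {
    # $1 = current value, e.g. from `sysctl -n vm.max_map_count`
    if [ "$1" -ge 1000000 ]; then
        echo OK
    else
        echo "too low: $1"
    fi
}

# On a live node: check_max_map_count "$(sysctl -n vm.max_map_count)"
check_max_map_count 1000000
```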
- Create an O&M user and disable remote login of the root user for security purposes.
- Remove the restriction that forbids common users to use the su command.
vi /etc/pam.d/su
Use the comment tag (#) to comment out the line that restricts use of the su command.
- Disable the remote login of the root user.
vi /etc/ssh/sshd_config

Change the value of PermitRootLogin to no.

- Run the following command to restart the SSHD service to make the configuration take effect:
systemctl restart sshd.service
- Change the validity period of the O&M account password to 90 days.
passwd -x 90 globalcacheop

Set the warning period before the O&M account password expires to 7 days.
passwd -w 7 globalcacheop

Allow the O&M account password to be changed within 35 days after it expires; after that period, the account is disabled.

passwd -i 35 globalcacheop

After the cluster is started, do not run zkCli.sh in the /opt/apache-zookeeper-3.6.3-bin-bcm/bin or /opt/apache-zookeeper-3.6.3-bin/bin directory to modify the ZooKeeper cluster information. Otherwise, serious problems may occur.