Verifying Global Cache
Log in as the globalcacheop user to use the ZooKeeper and Global Cache services.
Configuring RBD and QEMU
- Modify the QEMU configuration file and uncomment the user and group settings.
vi /etc/libvirt/qemu.conf
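By default the user and group settings in qemu.conf are commented out. After uncommenting, they typically look as follows (the "root" values are the upstream defaults, shown here for illustration; use the account appropriate for your deployment):

```ini
# /etc/libvirt/qemu.conf
user = "root"
group = "root"
```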

- Configure the Global Cache environment variable to the libvirtd service.
vi /usr/lib/systemd/system/libvirtd.service
Add the following content to the [Service] section:
Environment="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/gcache_adaptor_compile/third_part/lib/"
Environment="C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/gcache_adaptor_compile/third_part/inc/"
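Note that unit files under /usr/lib/systemd/system can be overwritten by package upgrades, and systemd does not perform shell expansion of `$VAR` inside Environment= lines. As an alternative sketch, the same variables can be set in a drop-in override (the file name gcache.conf is an arbitrary choice; the paths match the ones used above):

```ini
# /etc/systemd/system/libvirtd.service.d/gcache.conf
[Service]
Environment="LD_LIBRARY_PATH=/opt/gcache_adaptor_compile/third_part/lib/"
Environment="C_INCLUDE_PATH=/opt/gcache_adaptor_compile/third_part/inc/"
```

After creating the drop-in, reload the systemd configuration and restart libvirtd as in the next step.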

- Restart the libvirt service.
systemctl daemon-reload
systemctl start libvirtd
Starting the CCM ZooKeeper
- Log in as the globalcacheop user.
- Go to the /opt/apache-zookeeper-3.6.3-bin/bin directory.
cd /opt/apache-zookeeper-3.6.3-bin/bin
- Start the ZooKeeper server.
sh zkServer.sh start
- Query the ZooKeeper server status.
sh zkServer.sh status
If the servers are working in cluster mode, the command output ends with a Mode field. Based on the election algorithm, the cluster has one leader and multiple followers.
Mode: follower / Mode: leader
To stop the ZooKeeper server, run the following command:
sh zkServer.sh stop
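When checking many nodes, the Mode value can be parsed out of the status output. The following is a minimal sketch that parses a hard-coded sample of typical zkServer.sh status output (the sample text is an assumption; in practice, pipe the real command output into the function):

```shell
#!/bin/sh
# Extract the "Mode:" value from zkServer.sh status output.
# In practice:  sh zkServer.sh status 2>/dev/null | parse_mode
parse_mode() {
    awk -F': ' '/^Mode:/ {print $2}'
}

# Hard-coded sample output (an assumption of the typical format).
sample_output="ZooKeeper JMX enabled by default
Using config: /opt/apache-zookeeper-3.6.3-bin/conf/zoo.cfg
Mode: follower"

mode=$(printf '%s\n' "$sample_output" | parse_mode)
echo "node mode: $mode"
case "$mode" in
    leader|follower) status="cluster" ;;
    *) status="standalone-or-down" ;;
esac
echo "status: $status"
```

This makes it easy to loop over all ZooKeeper nodes and confirm that exactly one reports leader.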
Starting the BCM ZooKeeper
- Log in as the globalcacheop user.
- Go to the /opt/apache-zookeeper-3.6.3-bin-bcm/bin directory.
cd /opt/apache-zookeeper-3.6.3-bin-bcm/bin
- Start the ZooKeeper server.
sh zkServer.sh start
- Query the ZooKeeper server status.
sh zkServer.sh status
If the servers are working in cluster mode, the command output ends with a Mode field. Based on the election algorithm, the cluster has one leader and multiple followers.
Mode: follower / Mode: leader
To stop the ZooKeeper server, run the following command:
sh zkServer.sh stop
Clearing ZooKeeper
- Log in as the globalcacheop user.
- Create a zk_clean.sh script for the CCM ZooKeeper cluster.
ZK_CLI_PATH="/opt/apache-zookeeper-3.6.3-bin/bin/zkCli.sh"
echo 'deleteall /ccdb' >> ./zk_clear.txt
echo 'deleteall /ccm_cluster' >> ./zk_clear.txt
echo 'deleteall /pool' >> ./zk_clear.txt
echo 'deleteall /pt_view' >> ./zk_clear.txt
echo 'deleteall /alarm' >> ./zk_clear.txt
echo 'deleteall /snapshot_manager' >> ./zk_clear.txt
echo 'deleteall /ccm_clusternet_link' >> ./zk_clear.txt
echo 'deleteall /tls' >> ./zk_clear.txt
echo 'ls /' >> ./zk_clear.txt
echo 'quit' >> ./zk_clear.txt
cat < ./zk_clear.txt | sh ${ZK_CLI_PATH}
echo > ./zk_clear.txt
rm -rf ./zk_clear.txt
- Run the zk_clean.sh script for the CCM ZooKeeper cluster.
sh zk_clean.sh
- Check whether ZooKeeper is cleared. If only "zookeeper" exists in the brackets [], as shown in the following figure, ZooKeeper is successfully cleared.

- Create a bcm_zk_clear.sh script for the BCM ZooKeeper.
ZK_CLI_PATH="/opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkCli.sh -server localhost:2182"
echo 'deleteall /bcm_cluster' >> ./zk_clear.txt
echo 'ls /' >> ./zk_clear.txt
echo 'quit' >> ./zk_clear.txt
cat < ./zk_clear.txt | sh ${ZK_CLI_PATH}
echo > ./zk_clear.txt
rm -rf ./zk_clear.txt
- Run the bcm_zk_clear.sh script for the BCM ZooKeeper cluster.
sh bcm_zk_clear.sh
- Check whether ZooKeeper is cleared. If only "zookeeper" exists in the brackets [], as shown in the following figure, ZooKeeper is successfully cleared.

Global Cache can be started only after ZooKeeper is cleared. Otherwise, exceptions may occur.
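The "only zookeeper remains" check can also be scripted rather than read by eye. Below is a minimal sketch that inspects a hard-coded sample of the zkCli `ls /` output (the sample string is an assumption; in practice, pipe the real output of `echo 'ls /' | sh ${ZK_CLI_PATH}` into the function):

```shell
#!/bin/sh
# Extract the last bracketed node list printed by zkCli (the root listing).
check_cleared() {
    grep -o '\[[^]]*\]' | tail -n 1
}

# Assumed zkCli output line after a successful clear.
sample='[zookeeper]'

nodes=$(printf '%s\n' "$sample" | check_cleared)
if [ "$nodes" = "[zookeeper]" ]; then
    echo "ZooKeeper cleared"
else
    echo "residual nodes: $nodes"
fi
```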
Starting Global Cache
- After installing and deploying the server and client software, log in as the globalcacheop user, and start the server software on all server nodes.
- Ensure that the server has at least 180 GB memory.
- To prevent the server from consuming excessive buffer/cache when flushing logs, and to avoid free memory remaining unreclaimed for long periods, you are advised to uninstall the openEuler performance optimization package and adjust the kernel configuration.
- echo 20971520 > /proc/sys/vm/min_free_kbytes
- For details, see Kunpeng BoostKit for SDS Global Cache Tuning Guide.
sudo systemctl start GlobalCache.target
- Start a new server node and check the usage of each pool.
export LD_LIBRARY_PATH="/opt/gcache/lib"
cd /opt/gcache/bin
./bdm_df
Check whether the usage of each pool changes properly.
Verifying Global Cache
- On the client, run I/O and fio tests to check whether read and write operations between the server and the client work properly and whether the performance is normal.
- Make the environment variables take effect.
source /etc/profile
- Create a pool request on the client Ceph.
ceph osd pool create rbd 128
- Run the ceph df command to check the pool ID.
ceph df
As shown in the following figure, the pool ID of the RBD pool is 782.
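If the pool ID is needed in a script rather than read from the table, it can be parsed out of the `ceph df` POOLS section (the `ceph osd lspools` command also prints ID/name pairs). A minimal sketch, run here against a hard-coded sample table whose values, including the ID 782, are illustrative:

```shell
#!/bin/sh
# Print the ID column for the named pool from a ceph df POOLS table.
# In practice:  ceph df | pool_id rbd
pool_id() {
    awk -v pool="$1" '$1 == pool {print $2; exit}'
}

# Assumed sample of the POOLS section of `ceph df` output.
sample="POOL  ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
rbd   782 128  0 B     0        0 B   0      100 GiB"

id=$(printf '%s\n' "$sample" | pool_id rbd)
echo "rbd pool ID: $id"
```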

- Update the pool ID of the RBD to the bcm.xml file and use the BCM to import the file again.
The following figure shows the content of the bcm.xml file before the update.

The following figure shows the bcm.xml file after the update. (This section describes how to update the file when pools are added. For details, see Using the BCM Tool.)
Import the file.
cd /opt/gcache_adaptor_compile/third_part/bin/
./bcmtool_c import
The following information indicates that the import is successful.

- Create an image request on the Ceph client.
rbd create foo --size 1G --pool rbd --image-format 2 --image-feature layering
- Perform I/O tests.
rbd bench --io-type rw --io-pattern rand --io-total 4K --io-size 4K rbd/foo
- Use the fio tool to perform read and write operations.
fio -name=test -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=8K -bs=4K -rw=write --verify_pattern=0x12345678
fio -name=test -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=4K -bs=4K -rw=write --verify_pattern=0x8888888 -offset=4K
- Check the Ceph pool status and image information.
ceph df
rbd -p rbd --image foo info
- Make the environment variables take effect.
- Check whether data is correctly written to Global Cache on the server.
cd /opt/gcache/bin
./bdm_df