vim /etc/libvirt/qemu.conf
vim /usr/lib/systemd/system/libvirtd.service
Environment="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/gcache_adaptor_compile/third_part/lib/"
Environment="C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/gcache_adaptor_compile/third_part/inc/"
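Environment= is a systemd [Service] directive, so the two lines above belong in the [Service] section of libvirtd.service. A minimal sketch of the placement only; the unit's existing directives are omitted here and should be left unchanged:

[Service]
# Added for the Global Cache adaptor libraries and headers
Environment="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/gcache_adaptor_compile/third_part/lib/"
Environment="C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/gcache_adaptor_compile/third_part/inc/"
# ... existing ExecStart and other directives stay as shipped ...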
systemctl daemon-reload
systemctl start libvirtd
Perform the following operations on all server nodes.
cd /opt/apache-zookeeper-3.6.3-bin/bin
sh zkServer.sh start
sh zkServer.sh status
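If the server nodes are administered from a single host, the start and status steps can be scripted instead of being run node by node. A minimal sketch assuming passwordless SSH and hypothetical hostnames node1, node2, node3:

# Start ZooKeeper and show its role on every server node
for h in node1 node2 node3; do
    ssh root@${h} "cd /opt/apache-zookeeper-3.6.3-bin/bin && sh zkServer.sh start && sh zkServer.sh status"
done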
The cluster consists of one leader node and multiple follower nodes, so the status query prints one of the following:
Mode: leader
Mode: follower
To stop the ZooKeeper server, run:
sh zkServer.sh stop
Perform the following operations on all server nodes.
cd /opt/apache-zookeeper-3.6.3-bin-bcm/bin
sh zkServer.sh start
sh zkServer.sh status
The cluster consists of one leader node and multiple follower nodes, so the status query prints one of the following:
Mode: leader
Mode: follower
To stop the ZooKeeper server, run:
sh zkServer.sh stop
Before starting Global Cache, clean up ZooKeeper completely; otherwise, unknown errors may occur.
vim zk_clean.sh
ZK_CLI_PATH="/opt/apache-zookeeper-3.6.3-bin/bin/zkCli.sh"
echo 'deleteall /ccdb' >> ./zk_clear.txt
echo 'deleteall /ccm_cluster' >> ./zk_clear.txt
echo 'deleteall /pool' >> ./zk_clear.txt
echo 'deleteall /pt_view' >> ./zk_clear.txt
echo 'deleteall /alarm' >> ./zk_clear.txt
echo 'deleteall /snapshot_manager' >> ./zk_clear.txt
echo 'deleteall /ccm_clusternet_link' >> ./zk_clear.txt
echo 'deleteall /tls' >> ./zk_clear.txt
echo 'ls /' >> ./zk_clear.txt
echo 'quit' >> ./zk_clear.txt
cat < ./zk_clear.txt | sh ${ZK_CLI_PATH}
echo > ./zk_clear.txt
rm -rf ./zk_clear.txt
sh zk_clean.sh
As shown in the figure below, the cleanup succeeded if only zookeeper remains inside the [] printed by ls /.
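The same check can be made without reading through the full zkCli output. A small sketch that queries the root znodes non-interactively with the zkCli.sh path used above (the exact console output format may vary slightly between ZooKeeper versions):

# Succeeds only when ls / returns exactly [zookeeper]
echo 'ls /' | sh /opt/apache-zookeeper-3.6.3-bin/bin/zkCli.sh 2>/dev/null \
    | grep -q '\[zookeeper\]' && echo "cleanup OK" || echo "residual znodes remain"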
vim bcm_zk_clean.sh
ZK_CLI_PATH="/opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkCli.sh -server localhost:2182"
echo 'deleteall /bcm_cluster' >> ./zk_clear.txt
echo 'ls /' >> ./zk_clear.txt
echo 'quit' >> ./zk_clear.txt
cat < ./zk_clear.txt | sh ${ZK_CLI_PATH}
echo > ./zk_clear.txt
rm -rf ./zk_clear.txt
sh bcm_zk_clean.sh
As shown in the figure below, the cleanup succeeded if only zookeeper remains inside the [].
sudo systemctl start GlobalCache.target
export LD_LIBRARY_PATH="/opt/gcache/lib"
cd /opt/gcache/bin
./bdm_df
Check whether the usage of each pool has changed as expected.
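To make the comparison explicit, the bdm_df output can be captured before and after the client I/O test described below and then diffed. A minimal sketch, assuming bdm_df simply prints its report to stdout (the file names under /tmp are arbitrary):

cd /opt/gcache/bin
./bdm_df > /tmp/bdm_df_before.txt
# ... run the rbd bench / fio commands from the client ...
./bdm_df > /tmp/bdm_df_after.txt
diff /tmp/bdm_df_before.txt /tmp/bdm_df_after.txt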
On the client, run the I/O and fio commands below to check that reads and writes between the server and the client work correctly and to gauge performance.
source /etc/profile
ceph osd pool create rbd 128
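Depending on the Ceph release in use, the new pool may also need to be tagged for RBD before images can be created in it; if your version requires it, the standard command is:

ceph osd pool application enable rbd rbd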
ceph df
As shown in the figure below, the pool ID of rbd is 782.
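The pool ID can also be read directly on the command line instead of from the ceph df output:

# Lists "<id> <name>" for every pool
ceph osd lspools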
For detailed instructions, see "Using bcmtool" in the Global Cache Feature Guide.
Before the update:
After the update:
cd /opt/gcache_adaptor_compile/third_part/bin/
./bcmtool_c import
The figure below shows the message displayed when the import succeeds.
rbd create foo --size 1G --pool rbd --image-format 2 --image-feature layering
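To confirm that the image exists before running the benchmarks, the pool's images can be listed:

rbd ls -p rbd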
rbd bench --io-type rw --io-pattern rand --io-total 4K --io-size 4K rbd/foo
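rbd bench can also exercise a single I/O direction if read or write throughput needs to be measured separately; for example, a small sequential write run against the same image (the sizes here are chosen only for illustration):

rbd bench --io-type write --io-pattern seq --io-total 4M --io-size 4K rbd/foo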
fio -name=test -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=8K -bs=4K -rw=write --verify_pattern=0x12345678
fio -name=test -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=4K -bs=4K -rw=write --verify_pattern=0x8888888 -offset=4K
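To confirm that the written patterns can be read back intact through the cache, each 4K region can be checked with a read pass. A sketch that mirrors the two writes above, using fio's standard verify options:

fio -name=verify -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=4K -bs=4K -rw=read -verify=pattern -verify_pattern=0x12345678
fio -name=verify -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=4K -bs=4K -rw=read -verify=pattern -verify_pattern=0x8888888 -offset=4K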
ceph df
rbd -p rbd --image foo info
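rbd du additionally reports how much space the image has actually allocated, which should reflect the data just written:

rbd du rbd/foo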
cd /opt/gcache/bin
./bdm_df