Both the ZooKeeper service and the GlobalCache service must be started as the globalcacheop user.
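For example, switch to that account before running any of the start commands below (a minimal sketch; only the account name comes from this document):
# switch to the service account and confirm the active user
su - globalcacheop
whoami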
Edit the libvirt configuration files:
vi /etc/libvirt/qemu.conf
vi /usr/lib/systemd/system/libvirtd.service
Add the following to the [Service] section of libvirtd.service:
Environment="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/gcache_adaptor_compile/third_part/lib/"
Environment="C_INCLUDE_PATH=$C_INCLUDE_PATH:/opt/gcache_adaptor_compile/third_part/inc/"
Then reload systemd and start libvirtd:
systemctl daemon-reload
systemctl start libvirtd
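To confirm that the added Environment lines were picked up after the reload, the unit's loaded properties can be inspected (a sketch using standard systemctl options; not part of the original procedure):
# show the Environment= values systemd loaded for libvirtd
systemctl show libvirtd -p Environment
# confirm the service is running
systemctl status libvirtd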
Start the first ZooKeeper server:
cd /opt/apache-zookeeper-3.6.3-bin/bin
sh zkServer.sh start
Check its status:
sh zkServer.sh status
If ZooKeeper runs as a cluster, the status output ends with a Mode line: the nodes elect one leader among themselves and the remaining nodes become followers.
Mode: follower / Mode: leader (cluster mode)
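To see which role each node took, the same status command can be run on every host; a sketch assuming hostnames node1..node3 and passwordless SSH (both are placeholders):
# print the Mode line reported by each ZooKeeper node
for host in node1 node2 node3; do
    echo -n "$host: "
    ssh "$host" "sh /opt/apache-zookeeper-3.6.3-bin/bin/zkServer.sh status 2>/dev/null | grep Mode"
done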
To stop the ZooKeeper server, run:
sh zkServer.sh stop
Start the second ZooKeeper instance (the -bcm installation) in the same way:
cd /opt/apache-zookeeper-3.6.3-bin-bcm/bin
sh zkServer.sh start
Check its status:
sh zkServer.sh status
As before, in cluster mode the status output ends with a Mode line, with one elected leader and the remaining nodes as followers.
Mode: follower / Mode: leader (cluster mode)
To stop this ZooKeeper server, run:
sh zkServer.sh stop
Clean up the GlobalCache znodes on the first ZooKeeper instance with the following script (zk_clean.sh), which builds a zkCli command file, pipes it into zkCli.sh, and then removes it:
ZK_CLI_PATH="/opt/apache-zookeeper-3.6.3-bin/bin/zkCli.sh"
echo 'deleteall /ccdb' >> ./zk_clear.txt
echo 'deleteall /ccm_cluster' >> ./zk_clear.txt
echo 'deleteall /pool' >> ./zk_clear.txt
echo 'deleteall /pt_view' >> ./zk_clear.txt
echo 'deleteall /alarm' >> ./zk_clear.txt
echo 'deleteall /snapshot_manager' >> ./zk_clear.txt
echo 'deleteall /ccm_clusternet_link' >> ./zk_clear.txt
echo 'deleteall /tls' >> ./zk_clear.txt
echo 'ls /' >> ./zk_clear.txt
echo 'quit' >> ./zk_clear.txt
cat < ./zk_clear.txt | sh ${ZK_CLI_PATH}
echo > ./zk_clear.txt
rm -rf ./zk_clear.txt
sh zk_clean.sh
Clean up the BCM znode on the second ZooKeeper instance with the corresponding script (bcm_zk_clear.sh), which talks to the -bcm instance on port 2182:
ZK_CLI_PATH="/opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkCli.sh -server localhost:2182"
echo 'deleteall /bcm_cluster' >> ./zk_clear.txt
echo 'ls /' >> ./zk_clear.txt
echo 'quit' >> ./zk_clear.txt
cat < ./zk_clear.txt | sh ${ZK_CLI_PATH}
echo > ./zk_clear.txt
rm -rf ./zk_clear.txt
sh bcm_zk_clear.sh
ZooKeeper must be cleaned up completely before GlobalCache is started; otherwise unexpected errors will occur.
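One way to double-check the cleanup is to list the remaining root znodes on both instances and confirm that the GlobalCache entries (/ccdb, /pool, /bcm_cluster, and so on) are gone; a sketch that reuses the zkCli.sh invocations from the scripts above:
# list the remaining root znodes on the first instance
echo 'ls /' | sh /opt/apache-zookeeper-3.6.3-bin/bin/zkCli.sh
# list the remaining root znodes on the -bcm instance (port 2182)
echo 'ls /' | sh /opt/apache-zookeeper-3.6.3-bin-bcm/bin/zkCli.sh -server localhost:2182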
Start GlobalCache:
sudo systemctl start GlobalCache.target
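To confirm the target and the units it pulls in actually started, standard systemctl queries can be used (the unit names under the target depend on the installation):
# check the target itself and list the units it starts
systemctl status GlobalCache.target
systemctl list-dependencies GlobalCache.target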
Check GlobalCache pool usage with bdm_df:
export LD_LIBRARY_PATH="/opt/gcache/lib"
cd /opt/gcache/bin
./bdm_df
Check whether the usage of each pool has changed as expected.
source /etc/profile
Create the rbd pool with 128 placement groups:
ceph osd pool create rbd 128
Check that the pool was created:
ceph df
In the ceph df output, the pool id of rbd is 782 in this example.
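The pool id can also be read directly with standard Ceph commands (782 is just this example's value):
# list pools with their ids
ceph osd lspools
# or show full pool detail, including the pool id and pg_num
ceph osd pool ls detail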
Update bcm.xml to include the new pool (only the incremental pool update is covered here; see the bcmtool usage documentation for details), then import it:
cd /opt/gcache_adaptor_compile/third_part/bin/
./bcmtool_c import
A successful import prints a confirmation message.
Create a test image:
rbd create foo --size 1G --pool rbd --image-format 2 --image-feature layering
Run a quick random read/write benchmark against the image:
rbd bench --io-type rw --io-pattern rand --io-total 4K --io-size 4K rbd/foo
Write known data patterns to the image with fio:
fio -name=test -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=8K -bs=4K -rw=write --verify_pattern=0x12345678
fio -name=test -ioengine=rbd -clientname=admin -pool=rbd -rbdname=foo -direct=1 -size=4K -bs=4K -rw=write --verify_pattern=0x8888888 -offset=4K
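To spot-check that both 4K patterns landed where expected, the image contents can be exported and inspected (a sketch; it assumes rbd export to stdout and hexdump are available on the client, and the exact byte layout shown depends on how fio writes the pattern):
# dump the first 8K of the image: bytes 0-4095 were written with pattern 0x12345678,
# bytes 4096-8191 with pattern 0x8888888 (the -offset=4K write above)
rbd export rbd/foo - | head -c 8192 | hexdump -C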
Check cluster usage and the image information:
ceph df
rbd -p rbd --image foo info
Check GlobalCache pool usage again:
cd /opt/gcache/bin
./bdm_df
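To make the pool-usage comparison concrete, the bdm_df output can be captured before and after the I/O test and diffed (a sketch; the file names are arbitrary and bdm_df is assumed to print plain text):
# before the rbd/fio test
./bdm_df > /tmp/bdm_df_before.txt
# ... run the rbd bench / fio commands ...
# after the test
./bdm_df > /tmp/bdm_df_after.txt
# show the per-pool differences
diff /tmp/bdm_df_before.txt /tmp/bdm_df_after.txt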