Configuring the Environment
Before performing the installation and deployment procedures in the following sections, configure the environment: configure the download source, disable the firewall, configure host names, Network Time Protocol (NTP), and password-free login, and disable SELinux.
Configuring the Download Source
- Confirm the openEuler repository on all server nodes.
- Ensure that the openEuler.repo file matches the openEuler repository version.
vi /etc/yum.repos.d/openEuler.repo
- The content of openEuler.repo of openEuler 20.03 is as follows:
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/OS/$basearch/
enabled=1
gpgcheck=0
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/everything/$basearch/
enabled=1
gpgcheck=0
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/EPOL/$basearch/
enabled=1
gpgcheck=0
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/debuginfo/$basearch/
enabled=1
gpgcheck=0
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/source/
enabled=1
gpgcheck=0
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/update/$basearch/
enabled=1
gpgcheck=0
gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
- The content of openEuler.repo of openEuler 22.03 is as follows:
[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/source/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler

[update-source]
name=update-source
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/update/source/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/source/RPM-GPG-KEY-openEuler
- After confirming that the content is correct, type :q! and press Enter to exit the file.
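A mismatched release path across repository sections is easy to miss when reviewing the file by hand. The following sketch (for reference only, not part of the delivery; the sample content and temporary file path are assumptions) extracts the release token from every baseurl line and warns when more than one release appears:

```shell
# Sketch: check that every baseurl in a repo file references the same
# openEuler release. A sample file in a temporary location stands in for
# /etc/yum.repos.d/openEuler.repo, which you would use on a real node.
REPO_FILE="$(mktemp)"
cat > "$REPO_FILE" <<'EOF'
[OS]
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/
[everything]
baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/everything/$basearch/
EOF
# Extract the release token (e.g. openEuler-22.03-LTS-SP1) from each baseurl.
releases=$(grep '^baseurl=' "$REPO_FILE" | sed 's|.*openeuler.org/\([^/]*\)/.*|\1|' | sort -u)
if [ "$(printf '%s\n' "$releases" | wc -l)" -eq 1 ]; then
    echo "repo file is consistent: $releases"
else
    echo "WARNING: mixed releases found:"
    printf '%s\n' "$releases"
fi
rm -f "$REPO_FILE"
```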
- Configure the KSAL-enabled Ceph RPM package compiled in Compiling the Ceph Installation Package as the local repository on all nodes.
- Create and go to the ceph-ksal directory.
mkdir -p /home/ceph-ksal
cd /home/ceph-ksal
- Place the RPM package compiled in Compiling the Ceph Installation Package in the ceph-ksal directory and decompress it.
cp /home/rpmbuild/RPMS/ceph-ksal-rpm.tar.gz /home/ceph-ksal
tar -zxvf ceph-ksal-rpm.tar.gz
- Create a local repository.
createrepo .
- Open the local.repo file.
vi /etc/yum.repos.d/local.repo
- Press i to enter the insert mode and add the following contents to the end of the file:
[ceph-ksal]
name=ceph-ksal
baseurl=file:///home/ceph-ksal
enabled=1
gpgcheck=0
priority=1
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
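The local repository definition can also be written without an interactive editor. The sketch below generates the same fields with a here-document; for safe illustration it writes to a temporary file (an assumption for testing), while on a real node you would set TARGET to /etc/yum.repos.d/local.repo:

```shell
# Sketch: generate the local repository definition non-interactively.
# TARGET is a temporary file for illustration; on a real node use
# /etc/yum.repos.d/local.repo instead.
TARGET="$(mktemp)"
cat > "$TARGET" <<'EOF'
[ceph-ksal]
name=ceph-ksal
baseurl=file:///home/ceph-ksal
enabled=1
gpgcheck=0
priority=1
EOF
cat "$TARGET"
```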
- On all nodes, configure pip to download from Huawei Mirrors to accelerate the download.
- Create a .pip directory and a pip.conf file in the directory.
mkdir -p ~/.pip
vi ~/.pip/pip.conf
- Press i to enter the insert mode and add the following content:
[global]
timeout = 120
index-url = https://repo.huaweicloud.com/repository/pypi/simple
trusted-host = repo.huaweicloud.com
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
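The same here-document approach works for the pip configuration. In this sketch, PIP_HOME stands in for your home directory so the example can run anywhere (an assumption for illustration); on a real node use $HOME so the file lands in ~/.pip/pip.conf:

```shell
# Sketch: generate the pip configuration non-interactively.
# PIP_HOME is a temporary directory for illustration; on a real node
# use "$HOME" instead so the file becomes ~/.pip/pip.conf.
PIP_HOME="$(mktemp -d)"
mkdir -p "$PIP_HOME/.pip"
cat > "$PIP_HOME/.pip/pip.conf" <<'EOF'
[global]
timeout = 120
index-url = https://repo.huaweicloud.com/repository/pypi/simple
trusted-host = repo.huaweicloud.com
EOF
cat "$PIP_HOME/.pip/pip.conf"
```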
Disabling the Firewall
The firewall security mechanism enabled by default in a Linux environment prevents normal connections between components. As a result, the Ceph cluster cannot be deployed properly. This is the behavior of Linux itself, and the Kunpeng BoostKit for SDS KSAL does not provide a solution to this issue. If you want to enable the firewall in your own system, you must devise a solution yourself.
This document provides a method for quickly disabling the firewall. Host OSs are not within the delivery scope of the Kunpeng BoostKit for SDS KSAL. The firewall configuration method provided in this document is for reference only and is not a part of commercial delivery. Therefore, no commercial commitment is made for firewall configuration. If you want to put it into commercial use, you shall evaluate and bear the risks.
Disabling the firewall may cause security issues. If you do not plan to enable the firewall, it is recommended that an end-to-end solution be used to eliminate the risks caused by disabling the firewall. You shall bear the security risks by yourself. If you need to use the firewall, you are advised to configure fine-grained firewall rules and enable related ports and protocols based on your requirements to ensure the security of the entire system.
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
Configuring Host Names
Configure permanent static host names. You are advised to set the names of the server nodes to ceph1 through ceph3 and those of the client nodes to client1 through client3.
- Configure node names.
- Configure the node name of ceph1.
hostnamectl --static set-hostname ceph1
- Configure the node names of ceph2 and ceph3 using commands similar to that in 1.a.
- Configure the node name of client1.
hostnamectl --static set-hostname client1
- Configure the node names of client2 and client3 using commands similar to that in 1.c.
- Modify the domain name resolution file on all the nodes.
- Open the file.
vi /etc/hosts
- Press i to enter the insert mode and add the following content to the file:
192.168.3.166 ceph1
192.168.3.167 ceph2
192.168.3.168 ceph3
192.168.3.160 client1
192.168.3.161 client2
192.168.3.162 client3
- The example IP addresses are those planned in the environment planning section. Replace them with the actual ones. You can run the ip a command to query the actual IP addresses.
- In this document, the cluster consists of three server nodes and three client nodes. Adjust the file content based on the actual number of nodes.
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
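A missing or misspelled entry in /etc/hosts is a common cause of later deployment failures. The sketch below checks that every planned node name appears in a hosts file; it runs against a sample file (repeating the example addresses from this document) so it is self-contained, while on a real node you would point HOSTS_FILE at /etc/hosts:

```shell
# Sketch: verify that every planned node name appears in a hosts file.
# HOSTS_FILE is a sample for illustration; on a real node use /etc/hosts.
HOSTS_FILE="$(mktemp)"
cat > "$HOSTS_FILE" <<'EOF'
192.168.3.166 ceph1
192.168.3.167 ceph2
192.168.3.168 ceph3
192.168.3.160 client1
192.168.3.161 client2
192.168.3.162 client3
EOF
missing=0
for node in ceph1 ceph2 ceph3 client1 client2 client3; do
    grep -qw "$node" "$HOSTS_FILE" || { echo "missing entry: $node"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all node names are present"
```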
Configuring NTP
Ceph automatically checks the time on the nodes in the cluster and generates an alarm if the time difference between nodes is too large. Therefore, configure clock synchronization between the nodes.
- Install the NTP service on each node.
yum -y install ntp ntpdate
- Back up the original configuration on each node.
cd /etc && mv ntp.conf ntp.conf.bak
- Configure ceph1 as the NTP server node.
- Create an NTP file on ceph1.
vi /etc/ntp.conf
- Press i to enter the insert mode and add the following content.
restrict 127.0.0.1
restrict ::1
restrict 192.168.3.0 mask 255.255.255.0
server 127.127.1.0
fudge 127.127.1.0 stratum 8
The line restrict 192.168.3.0 mask 255.255.255.0 specifies the public network segment and subnet mask of ceph1.
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
- Configure all nodes except ceph1 as NTP client nodes.
- Create an NTP file on all nodes except ceph1.
vi /etc/ntp.conf
- Press i to enter the insert mode and add the following content. This IP address is the IP address of ceph1.
server 192.168.3.166
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
- Start the NTP service.
- Start the NTP service on ceph1 and check the service status.
systemctl start ntpd.service
systemctl enable ntpd.service
systemctl status ntpd.service
The command output is as follows:
[root@ceph1 ~]# systemctl start ntpd.service
[root@ceph1 ~]# systemctl enable ntpd.service
[root@ceph1 ~]# systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-09-01 10:14:56 CST; 8s ago
 Main PID: 129667 (ntpd)
    Tasks: 2
   Memory: 3.5M
   CGroup: /system.slice/ntpd.service
           └─129667 /usr/sbin/ntpd -u ntp:ntp -g
Sep 01 10:14:56 ceph1 ntpd[129667]: Listen normally on 4 bond2 192.168.2.128:123
Sep 01 10:14:56 ceph1 ntpd[129667]: Listen normally on 5 bond1 192.168.1.128:123
Sep 01 10:14:56 ceph1 ntpd[129667]: Listen normally on 6 lo [::1]:123
Sep 01 10:14:56 ceph1 ntpd[129667]: Listen normally on 7 enp125s0f0 [fe80::5d5a:faa4:75a4:4afa%2]:123
Sep 01 10:14:56 ceph1 ntpd[129667]: Listen normally on 8 bond2 [fe80::a975:9916:4607:a50b%12]:123
Sep 01 10:14:56 ceph1 ntpd[129667]: Listen normally on 9 bond1 [fe80::147b:9453:cfc0:d19f%13]:123
Sep 01 10:14:56 ceph1 ntpd[129667]: Listening on routing socket on fd #26 for interface updates
Sep 01 10:14:56 ceph1 ntpd[129667]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 01 10:14:56 ceph1 ntpd[129667]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
Sep 01 10:14:56 ceph1 systemd[1]: Started Network Time Service.
- After the NTP service is started, wait 5 minutes, and then forcibly synchronize the time of the NTP client nodes with that of the NTP server node (ceph1).
ntpdate ceph1
- On all nodes except ceph1, write the system time to the hardware clock to prevent the configuration from being lost after a restart.
hwclock -w
- Install and start the crontab tool on all nodes except ceph1.
yum install -y crontabs
systemctl enable crond.service
systemctl start crond.service
crontab -e
- Press i to enter the insert mode and add the following content to enable the system to automatically synchronize time with ceph1 every 10 minutes:
*/10 * * * * /usr/sbin/ntpdate 192.168.3.166
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
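The per-client steps above (write the server line, then add the crontab entry) can be scripted. In the sketch below, NTP_CONF and CRON_FILE are temporary files so the example is safe to run anywhere (an assumption for illustration); on a real client node you would write /etc/ntp.conf directly and install the entry with crontab. The address 192.168.3.166 is the example ceph1 IP from this document:

```shell
# Sketch: scripted NTP client configuration. NTP_CONF and CRON_FILE are
# temporary files for illustration; on a real node write /etc/ntp.conf
# instead and install the entry with `crontab "$CRON_FILE"`.
NTP_CONF="$(mktemp)"
CRON_FILE="$(mktemp)"
echo "server 192.168.3.166" > "$NTP_CONF"   # ceph1 (example IP)
echo '*/10 * * * * /usr/sbin/ntpdate 192.168.3.166' > "$CRON_FILE"
cat "$NTP_CONF" "$CRON_FILE"
```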
Configuring Password-Free Login
- Generate a public key on ceph1.
ssh-keygen -t rsa
Press Enter at each prompt to use the default configuration.
- On ceph1, issue the public key to other nodes in the cluster.
for i in {1..3};do ssh-copy-id ceph$i;done
for i in {1..3};do ssh-copy-id client$i;done

Confirm the operation and enter the password of the root user as prompted.
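If you prefer not to press Enter through the ssh-keygen prompts, the key pair can be generated non-interactively. The sketch below writes the key into a temporary directory so it does not touch your real keys (an assumption for illustration); on ceph1 you would omit -f, keep the default ~/.ssh/id_rsa, and then distribute the key with ssh-copy-id as above:

```shell
# Sketch: non-interactive key generation. KEY_DIR is a temporary
# directory for illustration; on ceph1, omit -f so the key lands in the
# default ~/.ssh/id_rsa, then distribute it with ssh-copy-id.
KEY_DIR="$(mktemp -d)"
ssh-keygen -t rsa -N "" -f "$KEY_DIR/id_rsa" -q   # -N "": empty passphrase
ls "$KEY_DIR"
```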
Disabling SELinux
The SELinux security mechanism enabled by default in a Linux environment prevents normal connections between components. As a result, the Ceph cluster cannot be deployed properly. This is the behavior of Linux itself, and the Kunpeng BoostKit for SDS does not provide a solution to this issue. If you want to use SELinux in your own system, you must devise a solution yourself.
We provide a method for quickly disabling SELinux. The SELinux configuration method provided in the Kunpeng BoostKit for SDS is for reference only. You need to evaluate the method and bear related risks.
Disabling SELinux may cause security issues. If you do not plan to enable SELinux, it is recommended that an end-to-end solution be used to eliminate the risks caused by disabling SELinux. You shall bear the security risks by yourself. If you need to enable SELinux, configure fine-grained security rules based on actual SELinux issues to ensure system security.
Disable SELinux on all nodes.
- Method 1: Set SELinux to the permissive mode temporarily. The configuration becomes invalid after the server is restarted.
setenforce permissive
- Method 2: Set SELinux to the permissive mode permanently. The configuration takes effect after the server is restarted.
- Open the config file.
vi /etc/selinux/config
- Press i to enter the insert mode and set SELINUX to permissive.
[root@ceph1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
- Press Esc to exit the insert mode. Type :wq! and press Enter to save and exit the file.
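The permanent change can also be made without an editor, for example with sed. The sketch below operates on a copy of the relevant lines so it is safe to run anywhere (an assumption for illustration); on a real node you would run the same sed command against /etc/selinux/config after backing it up:

```shell
# Sketch: switch SELINUX to permissive with sed. CONF is a stand-in copy
# for illustration; on a real node target /etc/selinux/config instead
# (keep a backup first).
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$CONF"
grep '^SELINUX=' "$CONF"
```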