Configuring OSD Nodes
- Confirm available drives on all server nodes where OSDs are to be deployed. Before creating OSD partitions, run the following command to check drive usage on the server:
lsblk -T
In the example output, /dev/sda, /dev/sdc, and /dev/sdi are system drives. Available HDDs include /dev/sdd and /dev/sde, and available SSDs include /dev/nvme0n1.
Some drives may have been used as data drives in a previous Ceph cluster or may have an OS installed, and can therefore carry residual partitions. You can clear such partitions as described in How Do I Clear Residual Data Partitions?.
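The cleanup that the referenced FAQ describes generally amounts to erasing on-disk signatures and partition tables. The following is a minimal sketch only, assuming the standard wipefs (util-linux) and sgdisk (gdisk) tools; the DRY_RUN guard is an illustrative addition, not part of this guide:

```shell
# Sketch: clear residual partition data from a drive before reuse.
# DESTRUCTIVE - double-check the device name before running for real.
# Set DRY_RUN=1 to print the commands instead of executing them.
wipe_drive() {
    local dev="$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "wipefs --all $dev"
        echo "sgdisk --zap-all $dev"
    else
        wipefs --all "$dev"        # erase filesystem/RAID signatures
        sgdisk --zap-all "$dev"    # destroy GPT and MBR structures
    fi
}

DRY_RUN=1 wipe_drive /dev/sdd      # preview what would be run
```

Previewing first is a deliberate choice here: on a node with many drives, running the wipe against the wrong device destroys a system or data drive.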
- Create OSD partitions. On all nodes where OSDs are to be deployed, reset the drives and plan the WAL and DB partitions according to the available NVMe drive space.
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme1n1 mklabel gpt

Create one 60 GB partition as the WAL partition on /dev/nvme0n1 and one 180 GB partition as the DB partition on /dev/nvme1n1.
parted /dev/nvme0n1 mkpart primary ext4 1 60G
parted /dev/nvme1n1 mkpart primary ext4 1 180G

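Before running the partition commands, the plan can be sanity-checked with shell arithmetic. The 960 GB drive capacity below is an assumed example value, not a figure from this guide:

```shell
# Sketch: check how many 60 GB WAL + 180 GB DB pairs fit on one NVMe
# drive. The 960 GB capacity is an assumed example value.
wal_gb=60
db_gb=180
nvme_gb=960
per_osd_gb=$((wal_gb + db_gb))
echo "Space per OSD: ${per_osd_gb} GB"
echo "OSDs supported per NVMe drive: $((nvme_gb / per_osd_gb))"
```

With these example sizes, each OSD consumes 240 GB of NVMe space, so a 960 GB drive can serve at most four OSDs.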
- No /etc/fstab update is required here. The parted command takes effect in real time: once the partition commands are executed, the partition table is written to the drive. The /etc/fstab file only stores automatic mounting configurations, which mount a partition to a specified mount point after the system restarts.
- In a production environment, you are advised to place the DB partition and the WAL partition on the same drive to improve performance. This section is for reference only.
For details about how to quickly create multiple WAL and DB partitions, see How Do I Quickly Create Multiple WAL and DB Partitions?.
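The FAQ referenced above covers the supported procedure; as a rough illustration of the idea, consecutive WAL and DB partitions can be laid out in a loop. This sketch only prints the parted commands for review rather than executing them, and the device name and sizes are example assumptions:

```shell
# Sketch: generate parted commands for several consecutive WAL (60 GB)
# and DB (180 GB) partitions on one NVMe drive. Commands are printed,
# not executed, so they can be reviewed first.
gen_parts() {
    local dev=$1 count=$2 wal_gb=60 db_gb=180 start=1
    local i wal_end db_end
    for i in $(seq 1 "$count"); do
        wal_end=$((start + wal_gb))
        db_end=$((wal_end + db_gb))
        echo "parted $dev mkpart primary ext4 ${start}G ${wal_end}G"  # WAL for OSD $i
        echo "parted $dev mkpart primary ext4 ${wal_end}G ${db_end}G" # DB for OSD $i
        start=$db_end
    done
}

gen_parts /dev/nvme0n1 3    # print commands for three WAL/DB pairs
```

Review the printed ranges against the drive's actual capacity before piping them to a shell.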
Parent topic: Configuring the Environment