OSD Status in the Cluster Does Not Change to up After OSD Deployment
Symptom
After a server node is rebooted and OSDs are deployed, all OSDs on the node are in the in state; some OSD daemons start normally, while others fail to start. Checking the host shows that free hugepage memory remains. The cephadm log contains error messages similar to the following:
[2025-03-15 17:49:42.399173] --base-virtaddr=0x200000000000
[2025-03-15 17:49:42.399182] --match-allocations
[2025-03-15 17:49:42.399191] --file-prefix=spdk_pid152
[2025-03-15 17:49:42.399200] ]
EAL: No free 2048 kB hugepages reported on node 1
EAL: No free 2048 kB hugepages reported on node 2
EAL: No free 2048 kB hugepages reported on node 3
EAL: No free 524288 kB hugepages reported on node 1
EAL: No free 524288 kB hugepages reported on node 2
EAL: No free 524288 kB hugepages reported on node 3
TELEMETRY: No legacy callbacks, legacy socket not created
[2025-03-15 17:50:12.491533] nvme_ctrlr.c:3238:nvme_ctrlr_process_init: *ERROR*: Initialization timed out in state 8
[2025-03-15 17:50:12.491716] nvme.c: 710:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 0000:88:00.0
[2025-03-15 17:50:12.491743] nvme_pcie_common.c: 677:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
[2025-03-15 17:50:12.491759] nvme_qpair.c: 248:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:23 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2b5c991000 PRP2 0x0
[2025-03-15 17:50:12.491775] nvme_qpair.c: 452:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2025-03-15 17:50:12.491791] nvme_ctrlr.c:1520:nvme_ctrlr_identify_done: *ERROR*: nvme_identify_controller failed!
[2025-03-15 17:50:12.491801] nvme_ctrlr.c: 891:nvme_ctrlr_fail: *ERROR*: ctrlr 0000:88:00.0 in failed state.
2025-03-15T17:50:12.512+0800 fffbd1fb0040 -1 bdev() open failed to get nvme device with transport address 0000:88:00.0
2025-03-15T17:50:12.512+0800 fffbd1fb0040 -1 bluestore(/var/lib/ceph/osd/ceph-15/) mkfs failed, (1) Operation not permitted
2025-03-15T17:50:12.512+0800 fffbd1fb0040 -1 OSD::mkfs: ObjectStore::mkfs failed with error (1) Operation not permitted
2025-03-15T17:50:12.512+0800 fffbd1fb0040 -1 ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-15/: (1) Operation not permitted
2025-03-15 17:50:12,690 fffdf1eb4dc0 DEBUG create osd.15 with 0000:88:00.0 done
2025-03-15 17:50:12,910 fffdf1eb4dc0 DEBUG systemctl: stderr Created symlink /etc/systemd/system/ceph-366437b4-0181-11f0-bcfe-f82e3f2347d5.target.wants/ceph-366437b4-0181-11f0-bcfe-f82e3f2347d5@osd.9.service → /etc/systemd/system/ceph-366437b4-0181-11f0-bcfe-f82e3f2347d5@.service.
2025-03-15 17:50:13,426 fffdf1eb4dc0 DEBUG systemctl: stderr Created symlink /etc/systemd/system/ceph-366437b4-0181-11f0-bcfe-f82e3f2347d5.target.wants/ceph-366437b4-0181-11f0-bcfe-f82e3f2347d5@osd.8.service → /etc/systemd/system/ceph-366437b4-0181-11f0-bcfe-f82e3f2347d5@.service.
Cause Analysis
The log shows that the node failed to create the object store because no free hugepage memory could be found. The root cause is abnormal allocation and use of hugepage memory on the machine; common causes are that a sufficient number of hugepages was not successfully allocated, or that the configuration conflicts with the system's default hugepage mount point.
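In the log above, EAL reports no free hugepages on NUMA nodes 1 to 3, so even when the host as a whole still shows free hugepages, they may all reside on a single NUMA node. The following checks, using standard procfs/sysfs interfaces, are one way to confirm how hugepages are distributed and whether a conflicting default hugetlbfs mount is present (a diagnostic sketch, not part of the original procedure):
# Overall hugepage totals and free pages on this host
grep Huge /proc/meminfo
# Per-NUMA-node allocation of 2048 kB hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
# Existing hugetlbfs mount points; a conflicting default mount shows up here
mount | grep hugetlbfs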
Solution
- Unmount /dev/hugepages.
umount /dev/hugepages
- Reset the OSD environment configuration.
cephadm shell -v /lib/modules:/lib/modules -e DRIVER_OVERRIDE=uio_pci_generic sh /var/lib/ceph/spdk_lib/scripts/setup.sh reset
- Re-run steps 2 and 3 to allocate hugepage memory and switch the NVMe devices to the user-space driver.
echo 20480 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cephadm shell -v /lib/modules:/lib/modules -e DRIVER_OVERRIDE=uio_pci_generic sh /var/lib/ceph/spdk_lib/scripts/setup.sh
- Restart the cluster, then verify the result as shown below.
systemctl daemon-reload
systemctl restart ceph.target
If the error messages already appear while the OSDs are being deployed, and the OSDs still cannot start normally after the cluster is restarted, delete the affected OSDs and redeploy them.
Reference commands for removing an abnormal OSD from the Ceph cluster are as follows.
[OSD_ID] is the identifier of the OSD to delete, for example osd.0; [FSID] is the fsid of the current Ceph cluster.
cephadm shell
ceph osd stop [OSD_ID]
ceph osd out [OSD_ID]
ceph osd crush remove [OSD_ID]
ceph osd rm [OSD_ID]
ceph orch daemon rm [OSD_ID] --force
ceph auth rm [OSD_ID]
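Before deleting the on-disk files, it can be useful to confirm, while still inside the cephadm shell, that the OSD no longer appears in the CRUSH tree or in the orchestrator's daemon list; both are standard Ceph commands (a verification sketch):
# The removed OSD should no longer appear in either listing
ceph osd tree
ceph orch ps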
Delete the configuration files of the corresponding OSD on the physical machine:
exit
rm -rf /var/lib/ceph/[FSID]/[OSD_ID]/
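After the residual files are removed, the OSD can be redeployed. One common cephadm path is sketched below; [HOST] and [DEVICE] are placeholders (for example, node1 and /dev/nvme0n1), and your environment may instead require the site-specific OSD deployment procedure referenced in the steps above:
cephadm shell
# Create a new OSD on the given host and device via the orchestrator
ceph orch daemon add osd [HOST]:[DEVICE]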
Parent topic: FAQs