I tried to drain the host by running
sudo ceph orch host drain node-three
but it got stuck while removing the OSD, with the status below:
node-one@node-one:~$ sudo ceph orch osd rm status
OSD  HOST        STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT
2    node-three  draining  1    False    False  False  2024-04-20 20:30:34.689946
It's a test setup and I don't have anything written to the OSD.
Here is my ceph status:
node-one@node-one:~$ sudo ceph status
  cluster:
    id:     f5ac585a-fe8e-11ee-9452-79c779548dac
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum node-one,node-two (age 21m)
    mgr: node-two.zphgll(active, since 9h), standbys: node-one.ovegfw
    osd: 3 osds: 3 up (since 42m), 3 in (since 42m); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 577 KiB
    usage:   81 MiB used, 30 GiB / 30 GiB avail
    pgs:     2/6 objects misplaced (33.333%)
             1 active+clean+remapped
Is it normal for an orch osd rm drain to take so long?
With only 3 OSDs and a default CRUSH rule with replicated size 3, there is no target to drain the OSD to: every OSD must already hold one of the three replicas, so the PGs on the draining OSD have nowhere to move. If this is just a test cluster, you could reduce min_size to 1 and size to 2, which lets the two remaining OSDs hold a full replica set so the drain can finish. But please don't ever do that in production.
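As a minimal sketch, assuming the single pool shown in your ceph status is the one holding the data (substitute your actual pool name for <pool>):

sudo ceph osd pool set <pool> size 2
sudo ceph osd pool set <pool> min_size 1

Once size drops to 2, the misplaced objects can settle onto the two remaining OSDs and the drain on node-three should complete. You can watch its progress with the same sudo ceph orch osd rm status command you ran above.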