Hi,

I needed to add more spinning HDDs to my nodes (SuperMicro SSG-641E-E1CR36L)
and made the mistake of NOT setting osd_auto_discovery to "false", so
Ceph created OSDs on all 5 new spinning HDDs.

This was an issue because I wanted to configure the new OSDs the same way as the
existing ones (i.e. with WAL/DB on NVMe) once the other 37 drives arrive.
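For reference, a WAL/DB-on-NVMe layout like that can be expressed as a cephadm OSD service spec. This is only a sketch; the service_id and the commented device paths are made-up examples, and the device filters would need to match the actual hardware:

```yaml
service_type: osd
service_id: hdd_with_nvme_db    # example name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: true            # spinning HDDs become data devices
  db_devices:
    rotational: false           # non-rotational devices host the DB/WAL
    # "rotational: false" would also match SATA SSDs, so on a mixed box
    # it is safer to pin the NVMe devices explicitly, e.g.:
    # paths:
    #   - /dev/nvme0n1
```

Such a spec would be applied with `ceph orch apply -i <file>.yml` instead of `--all-available-devices`.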

No big harm done, though, because I can zap them and reconfigure (after
running ceph orch apply osd --all-available-devices --unmanaged=true) when I
receive the remaining 37 drives (I will be adding 6 drives to each
server).
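The zap-and-reconfigure step above could look roughly like this; the OSD ID, hostname, and device path are placeholders, not values from my cluster:

```shell
# Make the OSD service unmanaged first, so cephadm does not immediately
# recreate OSDs on the freshly zapped devices
ceph orch apply osd --all-available-devices --unmanaged=true

# Remove an auto-created OSD and zap its backing device in one step
ceph orch osd rm 42 --zap

# Alternatively, zap a device directly once no OSD uses it
ceph orch device zap host1 /dev/sdq --force
```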

The interesting part is that, for whatever reason, one of the existing
SSD-based OSDs is now down because the SSD drive it used changed from
/dev/sdp to /dev/sdu, and as a result
there is no "block" entry under /var/lib/ceph/FSID/osd.XX.

I am not sure why adding spinning disks messes up the order/naming of the
/dev/sdX devices.

I would appreciate some advice on the best course of action
to reconfigure the OSD that is down.
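In case it helps frame the question: ceph-volume OSDs normally reference their LVM volumes rather than /dev/sdX names, so re-activation usually survives a rename and recreates the missing "block" symlink. A sketch of what I assume the recovery commands would be (osd.XX is the placeholder from above, not a real ID):

```shell
# Confirm which LV actually backs osd.XX after the rename
cephadm ceph-volume -- lvm list

# Restarting the daemon should re-run activation and recreate the
# block symlink under /var/lib/ceph/FSID/osd.XX
ceph orch daemon restart osd.XX

# If the daemon still will not start, re-activate explicitly
cephadm shell -- ceph-volume lvm activate --all
```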

The cluster is healthy and not busy, with all the other OSDs working as
expected.
It has 7 hosts with 12 SSDs and 6 HDDs each (one of them is the host
with issues),
2 EC 4+2 pools, 2 MDS, and a few metadata pools replicated on NVMe.
There are also 3 NVMe disks on each host dedicated to pools.


Many thanks
Steven
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]