Hello all,

On 19.2.2, after enabling bdev_ioring and rebooting the host, the OSDs never come back online, as shown below. Why?

Also, the dashboard does not match the CLI: the host view in the dashboard shows the daemons as running. I tried failing over the manager too, but it is the same.
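For reference, this is roughly how the option would have been toggled and verified (a sketch assuming it was set cluster-wide via `ceph config` rather than in ceph.conf):

```shell
# Enable io_uring for BlueStore block devices (the change that precedes the issue)
ceph config set osd bdev_ioring true

# Verify the effective value for one of the affected OSDs
ceph config get osd.4 bdev_ioring

# Roll back if needed, then restart the OSDs
ceph config set osd bdev_ioring false
```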

Dashboard showing down

# ceph health detail
HEALTH_WARN 4 osds down; 1 host (4 osds) down; Degraded data redundancy: 12998/38994 objects degraded (33.333%), 287 pgs degraded, 801 pgs undersized
[WRN] OSD_DOWN: 4 osds down
    osd.4 (root=default,host=host07n) is down
    osd.5 (root=default,host=host07n) is down
    osd.6 (root=default,host=host07n) is down
    osd.7 (root=default,host=host07n) is down
[WRN] OSD_HOST_DOWN: 1 host (4 osds) down
    host host07n (root=default) (4 osds) is down
# ceph orch ps | grep -v running    ————> not showing anything down
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
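Note that `ceph orch ps` reports the state cached from cephadm's last host scan, while `ceph health` reflects the OSD map, so the two can disagree for a while. Forcing a refresh and comparing against the OSD map may reconcile them (a hedged suggestion, host name taken from the output above):

```shell
# Ask cephadm to re-scan the daemons on the affected host
ceph orch ps host07n --refresh

# What the monitors/OSD map think (this is what ceph health reports)
ceph osd tree down
```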
# systemctl list-units | grep -i osd
var-lib-ceph-osd-ceph\x2d4.mount                           loaded active     mounted      /var/lib/ceph/osd/ceph-4
var-lib-ceph-osd-ceph\x2d5.mount                           loaded active     mounted      /var/lib/ceph/osd/ceph-5
var-lib-ceph-osd-ceph\x2d6.mount                           loaded active     mounted      /var/lib/ceph/osd/ceph-6
var-lib-ceph-osd-ceph\x2d7.mount                           loaded active     mounted      /var/lib/ceph/osd/ceph-7
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service    loaded activating auto-restart Ceph osd.4 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.5.service    loaded activating auto-restart Ceph osd.5 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.6.service    loaded activating auto-restart Ceph osd.6 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.7.service    loaded activating auto-restart Ceph osd.7 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
system-ceph\x2dosd.slice                                   loaded active     active       Slice /system/ceph-osd
ceph-osd.target                                            loaded active     active       ceph target allowing to start/stop all ceph-osd@.service instances at once
Later they came to running:
# systemctl list-units | grep -i osd
var-lib-ceph-osd-ceph\x2d4.mount                           loaded active mounted /var/lib/ceph/osd/ceph-4
var-lib-ceph-osd-ceph\x2d5.mount                           loaded active mounted /var/lib/ceph/osd/ceph-5
var-lib-ceph-osd-ceph\x2d6.mount                           loaded active mounted /var/lib/ceph/osd/ceph-6
var-lib-ceph-osd-ceph\x2d7.mount                           loaded active mounted /var/lib/ceph/osd/ceph-7
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service    loaded active running Ceph osd.4 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.5.service    loaded active running Ceph osd.5 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.6.service    loaded active running Ceph osd.6 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.7.service    loaded active running Ceph osd.7 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
system-ceph\x2dosd.slice                                   loaded active active  Slice /system/ceph-osd
ceph-osd.target                                            loaded active active  ceph target allowing to start/stop all ceph-osd@.service instances at once
Then they failed:
# systemctl list-units | grep -i osd
var-lib-ceph-osd-ceph\x2d4.mount                             loaded active mounted /var/lib/ceph/osd/ceph-4
var-lib-ceph-osd-ceph\x2d5.mount                             loaded active mounted /var/lib/ceph/osd/ceph-5
var-lib-ceph-osd-ceph\x2d6.mount                             loaded active mounted /var/lib/ceph/osd/ceph-6
var-lib-ceph-osd-ceph\x2d7.mount                             loaded active mounted /var/lib/ceph/osd/ceph-7
● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service    loaded failed failed  Ceph osd.4 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.5.service    loaded failed failed  Ceph osd.5 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.6.service    loaded failed failed  Ceph osd.6 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
● ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.7.service    loaded failed failed  Ceph osd.7 for 3b850efe-5dec-11f0-af3c-c1a764f7824e
system-ceph\x2dosd.slice                                     loaded active active  Slice /system/ceph-osd
ceph-osd.target                                              loaded active active  ceph target allowing to start/stop all ceph-osd@.service instances at once
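To see why the units cycle from auto-restart to failed, the OSD journal should contain the actual startup error. A sketch, with the cephadm unit name reconstructed from the fsid shown above:

```shell
# Journal for one of the failing OSD units since the last boot
journalctl -b -u 'ceph-3b850efe-5dec-11f0-af3c-c1a764f7824e@osd.4.service' --no-pager | tail -n 50

# The same daemon logs via cephadm
cephadm logs --name osd.4
```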
Regards
Dev
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io