Hi!
I have a Ceph cluster, version 16.2.7, with this error:
root@s-26-9-19-mon-m1:~# ceph health detail
HEALTH_WARN 1 failed cephadm daemon(s)
[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
daemon osd.91 on s-26-8-2-1 is in error state
But that OSD doesn't exist anymore; I already deleted it.
root@s-26-9-19-mon-m1:~# ceph orch ps | grep s-26-8-2-1
crash.s-26-8-2-1          s-26-8-2-1          running (2d)  1h ago  3M  9651k  -      16.2.7  cc266d6139f4  2ed049f74b66
node-exporter.s-26-8-2-1  s-26-8-2-1  *:9100  running (2d)  1h ago  3M  24.3M  -      0.18.1  e5a616e4b9cf  817cc5370e7e
osd.90                    s-26-8-2-1          running (2d)  1h ago  3M  25.6G  4096M  16.2.7  cc266d6139f4  beb2ea3efb3b
root@s-26-8-2-1:~# cephadm ls|grep osd
"name": "osd.90",
"systemd_unit": "[email protected]",
"service_name": "osd",
Can you please tell me how to clear this error message?
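What I have considered trying, but am not sure is the right approach, is telling the orchestrator to drop the stale daemon record (the cluster fsid below is a placeholder for my own):

```shell
# On a mgr/admin node: ask the orchestrator to forget the failed daemon record
ceph orch daemon rm osd.91 --force

# Or, on the OSD host itself, remove any leftover cephadm unit for it
# (<fsid> is the cluster fsid from "cephadm ls" or /etc/ceph/ceph.conf)
cephadm rm-daemon --fsid <fsid> --name osd.91 --force
```

Is one of these the correct way, or is there a safer command to refresh cephadm's view?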
WBR,
Fyodor
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]