Hi Frank.
Check your cluster for inactive or incomplete placement groups. I saw similar
behavior on Octopus when some PGs were stuck in an incomplete, inactive, or peering state.
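
A quick way to check would be something like the following (standard Ceph CLI commands; they require a running cluster, and the exact output columns vary by release):

```shell
# Overall health, including the stuck slow-ops warning text
ceph health detail

# PGs stuck in inactive or unclean states
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean

# PGs currently incomplete or peering
ceph pg ls incomplete
ceph pg ls peering
```

If all PGs report active+clean, the slow-ops warning is more likely stale state held by the monitors rather than an actual blocked request.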

________________________________
From: Frank Schilder <[email protected]>
Sent: Monday, May 3, 2021 3:42:48 AM
To: [email protected] <[email protected]>
Subject: [ceph-users] OSD slow ops warning not clearing after OSD down

Dear cephers,

I have a strange problem. An OSD went down and recovery finished. For some 
reason, I have a slow ops warning for the failed OSD stuck in the system:

    health: HEALTH_WARN
            430 slow ops, oldest one blocked for 36 sec, osd.580 has slow ops

The OSD is auto-out:

| 580 | ceph-22 | 0 | 0 | 0 | 0 | 0 | 0 | autoout,exists |

It is probably a warning dating back to just before the failure. How can I clear it?

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]