Hi,
I'm (still) testing upgrading from Luminous to Nautilus and ran into the
following situation:
The lab-setup I'm testing in has three OSD-Hosts.
If one of those hosts dies, the store.db in /var/lib/ceph/mon/ on all my
Mon-Nodes starts to grow rapidly until either the OSD-host comes back up
or the disks are full.
On another cluster that's still on Luminous I don't see any growth at all.
Is that a difference in behaviour between Luminous and Nautilus, or is it
caused by the lab-setup only having three hosts, so that losing one host
degrades all PGs at the same time?
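For reference, this is roughly how I've been watching the store growth. A
minimal sketch; the helper function and the mon store path are assumptions
and may differ per cluster (replace $(hostname) with your mon's actual ID):

```shell
# Report the size (in KiB) of a monitor's RocksDB store directory.
# Hypothetical helper for watching growth over time; path is an assumption.
mon_store_size() {
    # du -sk prints "<KiB>\t<path>"; keep only the number
    du -sk "$1" | awk '{print $1}'
}

# Example usage on a mon node (adjust cluster name / mon ID as needed):
# mon_store_size /var/lib/ceph/mon/ceph-$(hostname)/store.db
#
# If the store keeps climbing while PGs are degraded, a manual compaction
# can sometimes reclaim space:
# ceph tell mon.$(hostname) compact
```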
--
Cheers,
Hardy
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]