Hi,
since we upgraded to Squid just yesterday morning, it's probably a bit
early for a reliable response. But the upgrade itself went great, no
hiccups at all. We also haven't had any complaints (yet), so I guess
we're good (for now).
We use CephFS for home directories and software development, RBD for
OpenStack and RGW for k8s backups (quite low usage). I assume we would
notice issues relatively quickly. We haven't heard any complaints from
customers yet either, but not all of them are on Squid yet, so hard to
say. Some new Squid deployments on customer clusters also seem stable
(we haven't had emergency calls yet wrt CephFS).
This probably doesn't help you a lot but I wanted to comment anyway. :-)
Regards,
Eugen
Quoting William David Edwards <[email protected]>:
Hi,
For those running CephFS on Squid: how mature is the release in your
experience? I'm especially interested in those using CephFS in a
shared hosting scenario, with many directories containing many small
files.
I'm extra wary of new releases nowadays, as we've run into several
production-impacting Ceph bugs in the past months post-upgrade,
across clusters. Most notably: OSDs failing and causing the MDS to
OOM (17.2.7 -> 17.2.8) [1], and major performance issues after
upgrading to Reef caused by the aligned 64k blocks constraint [2]
(RBD though, not CephFS).
With kind regards,
William David Edwards
[1]: https://tracker.ceph.com/issues/69764
[2]: https://github.com/ceph/ceph/pull/54772
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]