I don't have the largest data farm around, but I do have a fairly eclectic set of files and directories.

I have seen no differences in CephFS moving to Squid, though I do have some issues that are common to both Pacific and Squid.

1. CephFS cannot gracefully handle clients that go offline and come back online, specifically clients that sleep. Stale sessions cause Ceph resources to leak and Ceph to whine, and the Ceph-side resources have to be closed out manually.

2. On some clients where I've tried NFS access via ceph-nfs, some resources do not appear properly. In particular, some files return no data even though the same files are not empty in CephFS itself.

Some recent posts on this list may offer a fix for #2, but I haven't tried them yet.
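For #1, the manual cleanup is roughly the following sketch; the MDS rank, session id, and client address are placeholders, not values from my cluster, and whether you remove the blocklist entry depends on whether you want the client fenced afterward:

```
# List current client sessions on MDS rank 0; stale sleeping
# clients show up here with old activity timestamps.
ceph tell mds.0 session ls

# Evict the stale session by its id (example id).
ceph tell mds.0 session evict id=4305

# Eviction blocklists the client; once the client is known to be
# gone, optionally drop the entry (example address).
ceph osd blocklist rm 192.168.1.50:0/1234567890
```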

For permanent CephFS native clients I have no complaints.

   Tim

On 9/17/25 09:37, William David Edwards wrote:
Hi,

For those running CephFS on Squid: how mature is the release in your experience? I'm especially interested in those using CephFS in a shared hosting scenario, with many directories containing many small files.

I'm extra wary of new releases nowadays, as we've run into several production-impacting Ceph bugs in the past months post-upgrade, across clusters. (Most notably, failing OSDs causing the MDS to OOM (17.2.7 -> 17.2.8) [1], and major performance issues after upgrading to Reef caused by the aligned 64k blocks constraint [2] (RBD though, not CephFS).)

With kind regards,

William David Edwards

[1]: https://tracker.ceph.com/issues/69764
[2]: https://github.com/ceph/ceph/pull/54772
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]