Hi,

can you share where that comment comes from? I can't find it in the docs or on GitHub. The only place I found it is this Oracle guide, which is for Ceph Luminous and therefore quite old:

https://docs.oracle.com/en/operating-systems/oracle-linux/ceph-storage/ceph-luminous-UsingCephStorageforOracleLinux.html

I have no idea whether newer versions support subtree checking. We looked into Ganesha many years ago and it didn't work for us at all at that time. Giving it another try is on my agenda, but I can't find the time at the moment. Hopefully someone else has more insights for you.

Regards,
Eugen

Quoting Davíð Steinn Geirsson <[email protected]>:

Hey all,

When I moved my file server shares to CephFS, I put each share on its own CephFS file system. The reason was this comment in the nfs-ganesha example config:

    # Note that FSAL_CEPH does not support subtree checking, so there is
    # no way to validate that a filehandle presented by a client is
    # reachable via an exported subtree.
    #
    # For that reason, we just export "/" here.

Now, this is fine for a small number of shares, but as they have grown it feels like overkill to create two or more new pools (metadata + data, sometimes an additional EC data pool) for each share. Tuning the PG counts for all those pools is also a pain.

I'm wondering, would using a subvolume for the share provide the needed security isolation?
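For example, I was imagining something roughly like this (just a sketch; the fs, subvolume, and client names are placeholders, and I haven't verified that these caps alone are sufficient for isolation):

    # create a subvolume and look up the path it was created at
    ceph fs subvolume create myfs share1
    ceph fs subvolume getpath myfs share1
    # prints something like /volumes/_nogroup/share1/<uuid>

    # restrict the client's cephx caps to that path
    ceph fs authorize myfs client.share1 /volumes/_nogroup/share1/<uuid> rw

and then exporting only that path in ganesha instead of "/":

    EXPORT {
        Export_ID = 101;
        Path = "/volumes/_nogroup/share1/<uuid>";
        Pseudo = "/share1";
        Access_Type = RW;
        FSAL {
            Name = CEPH;
            User_Id = "share1";
            Filesystem = "myfs";
        }
    }

That way a single CephFS (and its two pools) could back many shares, with the cephx path restriction rather than subtree checking doing the enforcement.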

Best,
Davíð
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]

