Use

ceph fs set <fs_name> down true

After that, all MDS daemons serving fs_name will become standbys, and you can
cleanly remove everything.

Wait for the file system to be shown as down in "ceph status"; the command
above is non-blocking, and the shutdown can take a long time. Try to
disconnect all clients first.
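For reference, the full removal sequence might look like the sketch below. The pool names cephfs_data and cephfs_metadata are placeholders for your actual fs1 pools, and the mon_allow_pool_delete step assumes pool deletion is not yet enabled on your monitors.

```shell
# Mark the file system down; all of its MDS daemons become standbys.
ceph fs set fs1 down true

# Wait until "ceph status" no longer shows active MDS ranks for fs1,
# then remove the file system itself.
ceph fs rm fs1 --yes-i-really-mean-it

# Finally delete the data and metadata pools (names here are examples;
# pool deletion must be allowed in the monitor configuration first).
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
```

Note that pool deletion is irreversible, so double-check the pool names before the last two commands.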

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Francois Legrand <[email protected]>
Sent: 22 June 2020 15:56:56
To: ceph-users
Subject: [ceph-users] How to remove one of two filesystems

Hello,
I have a ceph cluster (nautilus 14.2.8) with 2 filesystems and 3 mds.
mds1 is managing fs1
mds2 manages fs2
mds3 is standby

I want to completely remove fs1.
It seems that the command to use is ceph fs rm fs1 --yes-i-really-mean-it,
and then to delete the data and metadata pools with ceph osd pool delete,
but in many threads I noticed that you must shut down the MDS before
running ceph fs rm.
Is that still the case?
What happens in my configuration (I have 2 fs)? If I stop mds1, mds3 will
take over. If I then stop mds3, what will mds2 do (try to manage both
file systems, or continue serving only fs2)?
Thanks for your advice.
F.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
