Hi,

The manual deployment only makes sense if you don't use the orchestrator; otherwise they would interfere. You can still create OSDs manually with cephadm, but it will report those as stray daemons not managed by cephadm, so your cluster won't get into a healthy state. It can be helpful for learning purposes, of course, but note that the manual deployment docs still contain references to Filestore, which has been deprecated for a while now. I believe it can't hurt to understand what's going on under the hood (cephadm uses ceph-volume to deploy OSDs), so playing around with manual deployment definitely makes sense to me. With a larger cluster, though, it makes more sense to become familiar with cephadm and the automatic deployment.

As for the second part (ceph orch daemon add ...): this also only makes sense if you manage single OSDs. Such an OSD would already be cephadm-managed and orchestrated, but in a multi-host cluster with multiple OSDs per host you would want to use the drivegroup config [3] and let cephadm handle the rest. Either way, it definitely makes sense to familiarize yourself with cephadm.
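For illustration, a minimal drivegroup spec could look something like this (the service_id and the device filter are just example values, adjust them to your hardware; see [3] for the full set of filters):

```yaml
# osd_spec.yaml - example OSD service spec for cephadm
service_type: osd
service_id: example_hdd_osds      # arbitrary name for this drive group
placement:
  host_pattern: '*'               # apply to all hosts in the cluster
spec:
  data_devices:
    rotational: 1                 # use all spinning disks as data devices
```

You would then apply it with "ceph orch apply -i osd_spec.yaml" (there's also a --dry-run flag to preview what cephadm would do), and cephadm deploys OSDs on all matching devices across the hosts.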

Does that clarify it a bit?

[3] https://docs.ceph.com/en/quincy/cephadm/services/osd/#drivegroups

Quoting Giuliano Maggi <[email protected]>:

Hi,

I am learning about Ceph, and I found these two ways of adding OSDs:

https://docs.ceph.com/en/quincy/install/manual-deployment/#short-form (via LVM)
AND
https://docs.ceph.com/en/quincy/cephadm/services/osd/#creating-new-osds (ceph orch daemon add osd *<host>*:*<device-path>*)

Are these two ways equivalent?

Thanks,
Giuliano
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
