Hi Guys,
I have 7 OSD nodes, each with 10 x 15 TB NVMe disks.
To start with, I want to use only 8 x 15 TB disks on each OSD node and keep
2 x 15 TB disks spare in case of a disk failure and recovery event.
I am going to use a 4+2 EC CephFS data pool to store the data.
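For context, this is roughly how I plan to create the profile and pool (a
minimal Python sketch driving the ceph CLI via subprocess; the profile, pool,
and filesystem names are placeholders, and the pg_num comes from the
calculation below):

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its output."""
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # 4+2 erasure-code profile with rack as the failure domain
    ceph("osd", "erasure-code-profile", "set", "cephfs_ec_profile",
         "k=4", "m=2", "crush-failure-domain=rack")

    # EC data pool (pg_num / pgp_num from the calculation below)
    ceph("osd", "pool", "create", "cephfs_data_ec", "1024", "1024",
         "erasure", "cephfs_ec_profile")

    # CephFS requires overwrites to be enabled on an EC data pool
    ceph("osd", "pool", "set", "cephfs_data_ec", "allow_ec_overwrites", "true")

    # Attach the pool to the filesystem ("cephfs" is a placeholder fs name)
    ceph("fs", "add_data_pool", "cephfs", "cephfs_data_ec")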
So, with the above setup, what would be the optimal number of placement
groups per OSD?
As per the PG calculator:
(8 OSDs per node x 7 nodes x 100 target PGs per OSD) / 6 (k+m) = 933.33,
and the nearest power of 2 is 1024.
With 1024 placement groups distributed across 56 OSDs, that works out to
1024 / 56, i.e. approximately 18 placement groups per OSD.
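Here is the same arithmetic as a quick Python sketch; the k+m shard
multiplier at the end is my own guess at what the docs actually count per
OSD, so please correct me if that part is wrong:

    # PG sizing per the PG calculator
    nodes = 7
    osds_per_node = 8           # 2 of the 10 disks kept spare
    target_pgs_per_osd = 100    # common target from the calculator
    k, m = 4, 2                 # EC profile 4+2

    num_osds = nodes * osds_per_node                 # 56
    raw = num_osds * target_pgs_per_osd / (k + m)    # 933.33
    pg_num = 2 ** round(raw).bit_length()            # power of 2 -> 1024

    print(pg_num / num_osds)            # ~18 PGs per OSD (PG count only?)
    # Each EC PG stores k+m shards, one shard per OSD, so maybe the
    # per-OSD figure the docs refer to is really:
    print(pg_num * (k + m) / num_osds)  # ~110 PG shards per OSD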
I don't think that's optimal, as the Ceph docs recommend 50-100 PGs per OSD.
So, am I doing something wrong, or missing something while calculating the
number of PGs per OSD?
Also, would it be best practice to keep 2 x 15 TB disks spare on each OSD
node, or should I use all of them?
Also, I am going to deploy the 7 OSD nodes across 4 racks and will set the
failure domain to "rack" so that Ceph can tolerate the loss of an entire
rack. I hope this will provide more protection; a sketch of the intended
CRUSH layout follows.
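This is roughly how I intend to lay out the CRUSH buckets (a sketch only;
the rack and host names are placeholders for my actual inventory):

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command."""
        subprocess.run(["ceph", *args], check=True)

    # 4 rack buckets under the default root, 7 hosts spread across them
    racks = {"rack1": ["node1", "node2"],
             "rack2": ["node3", "node4"],
             "rack3": ["node5", "node6"],
             "rack4": ["node7"]}

    for rack, hosts in racks.items():
        ceph("osd", "crush", "add-bucket", rack, "rack")    # create bucket
        ceph("osd", "crush", "move", rack, "root=default")  # attach to root
        for host in hosts:
            ceph("osd", "crush", "move", host, f"rack={rack}")

One thing I am not sure about: with only 4 racks and k+m = 6 shards, can the
default EC placement even find 6 distinct racks, or would I need a custom
CRUSH rule that places two shards per rack?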
Please advise.
Thanks,
Gagan