Hi Eugen,

Thanks for your insight here. We will go with moving the data to another pool.

Kind Regards

From: Eugen Block <[email protected]>
Date: Friday, 12 September 2025 at 5:04 pm
To: [email protected] <[email protected]>
Subject: [EXT] [ceph-users] Re: Is there a faster way to merge PGs?

Hit "send" too soon...

I meant to add that, apparently, there's no way to speed up PG
merging. Moving the data to a different pool is most likely much
faster if you have the capacity (which you seem to have if you want
to get rid of 1700 OSDs ;-) ).
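
For example, something along these lines (pool names are made up, and
note that 'rados cppool' does not preserve snapshots and needs client
IO stopped for the duration; for RBD pools, 'rbd migration' is usually
the better tool):

   # create the replacement pool with the target PG count
   ceph osd pool create testpool-new 8192

   # copy all objects into the new pool (stop client IO first)
   rados cppool testpool testpool-new

   # once the copy is verified, swap the pool names
   ceph osd pool rename testpool testpool-old
   ceph osd pool rename testpool-new testpool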


Zitat von Eugen Block <[email protected]>:

> Hi,
>
> I couldn't find any newer information than this:
>
>> PG merging works similarly to splitting, except that internally the
>> pg_num value is always decreased a single PG at a time. Merging is
>> a more complicated and delicate process that requires IO to the PG
>> to be paused for a few seconds, and doing merges a single PG at a
>> time allows the system to both minimize the impact and simplify the
>> overall complexity of the process.
>
> https://ceph.io/en/news/blog/2019/new-in-nautilus-pg-merging-and-autotuning
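>
> You can watch the merge progress via the pool's current vs. target
> pg_num, for example (pool name made up, output abbreviated):
>
>   ceph osd pool ls detail | grep testpool
>   # pool 1 'testpool' ... pg_num 16383 pgp_num 16383 pg_num_target 8192 ...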
>
> Regards,
> Eugen
>
> Zitat von Justin Mammarella <[email protected]>:
>
>> Hi folks,
>>
>> We are in the process of shrinking a Ceph cluster from around 2000
>> OSDs to 300 OSDs.
>>
>> We need to reduce our pg count from 16384 to 8192.
>>
>> After setting pg_num to 8192, it’s currently taking 3 hours to
>> reduce the pool’s PG count by one, which, given the number of PGs,
>> will take over 2 years to complete (8192 merges at 3 hours each is
>> roughly 1024 days).
>>
>> Are there any tunables to increase the speed of pg merging?
>>
>> Our alternative is to create a new pool and transfer the data.
>>
>> Regards,
>>
>> Justin
>>
>> _______________________________________________
>> ceph-users mailing list -- [email protected]
>> To unsubscribe send an email to [email protected]


_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]