Hi,

sorry forgot to add this.

Ceph release: 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)
The cluster consists of 360 OSDs, the index pool has 128 PGs, and
auto-resharding is enabled.
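For reference, both can be checked with something like the following (pool name taken from the health warning below; the resharding option may live elsewhere depending on how RGW is configured):

$ ceph osd pool get ceph-objectstore.rgw.buckets.index pg_num
$ ceph config get client.rgw rgw_dynamic_resharding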


Cheers,
Florian

> On 11. Sep 2024, at 18:18, Anthony D'Atri <[email protected]> wrote:
> 
> Which Ceph release?  How many OSDs and how many PGs in the index pool?  
> Auto-resharding enabled?
> 
>> On Sep 11, 2024, at 10:24 AM, Florian Schwab <[email protected]> 
>> wrote:
>> 
>> Hi everyone,
>> 
>> I hope someone here has an idea of what is happening or can give 
>> some pointers on how to debug it further.
>> 
>> We currently have a bucket with large omap objects. Following this 
>> guide (https://access.redhat.com/solutions/6450561) we are able to identify 
>> the affected bucket.
>> 
>> $ ceph health detail
>> HEALTH_WARN 25 large omap objects
>> [WRN] LARGE_OMAP_OBJECTS: 25 large omap objects
>>   25 large objects found in pool 'ceph-objectstore.rgw.buckets.index'
>>   Search the cluster log for 'Large omap object found' for more details.
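>> 
>> The warning can be mapped to the affected index shards by grepping the cluster log on a mon host, roughly like this (the log path may differ in containerized deployments):
>> 
>> $ grep 'Large omap object found' /var/log/ceph/ceph.log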
>> 
>> $ radosgw-admin metadata list --metadata-key bucket.instance | grep -i XXX
>>   "YYY:XXX",
>> 
>> $ radosgw-admin bilog list --bucket-id="XXX" --bucket="YYY" --max-entries=600000 | grep -c op_id
>> 600000
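>> 
>> As an additional sanity check (not part of the guide), the bucket's shard count and whether multisite sync is involved can be looked at with:
>> 
>> $ radosgw-admin bucket stats --bucket="YYY" | grep num_shards
>> $ radosgw-admin sync status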
>> 
>> So far so good! But when we try to trim the bilog we get the following error:
>> 
>> $ radosgw-admin bilog trim --bucket-id="XXX" --bucket="YYY"
>> ERROR: trim_bi_log_entries(): (2) No such file or directory
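>> 
>> In case it helps with debugging, the trim could be re-run with more verbose client-side logging, and the index objects for this bucket instance can be listed directly (same pool as in the health warning, bucket ID in place of XXX):
>> 
>> $ radosgw-admin bilog trim --bucket-id="XXX" --bucket="YYY" --debug-rgw=20 --debug-ms=1
>> $ rados -p ceph-objectstore.rgw.buckets.index ls | grep XXX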
>> 
>> 
>> The bucket itself doesn’t show any issues - all S3 operations are working.
>> 
>> 
>> Thanks for any input!
>> 
>> 
>> Cheers,
>> Florian
> 
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]