Anthony's got it right.  The RocksDB LZ4 compression will be applied incrementally as new compactions take place.  You can manually issue a full compaction to trigger it if you'd like.  If you want to switch back, just remove the compression flag and restart the OSD.  New SST files will then be written out uncompressed.
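For example, a manual full compaction can be issued per OSD (osd.6 here is just a placeholder ID; adapt to your cluster):

```shell
# Trigger a full RocksDB compaction on a single OSD (placeholder ID osd.6)
ceph tell osd.6 compact

# Or compact every OSD -- this can generate significant I/O, so on a busy
# cluster you may want to walk through OSDs one at a time instead
ceph tell osd.* compact
```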

Mark


On 8/9/25 5:35 PM, Anthony D'Atri wrote:
I *think* this will happen over time as each OSD’s RocksDB compacts, which 
might be incremental.

Also, could anybody clarify whether this setting is `compression_algorithm` from
https://docs.ceph.com/en/squid/rados/operations/pools/#setting-pool-values
https://docs.ceph.com/en/squid/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm
or if that's something different (e.g. if that's "actual data" instead of 
"metadata")?

I suspect it's different, because `ceph config show-with-defaults osd.6 | grep 
compression` reveals:

    bluestore_compression_algorithm    snappy
    bluestore_compression_mode         none
    bluestore_rocksdb_options          compression=kLZ4Compression,...

From this it looks like `bluestore_compression*` is the "Inline compression" for data
(https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/#inline-compression),
and `bluestore_rocksdb_options` is what the "RocksDB compression" is about.
I believe so.
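To illustrate the distinction, a rough sketch of how each knob is set ("mypool" is a placeholder, and `bluestore_rocksdb_options_annex` is assumed to be available on your release -- it appends to the base RocksDB option string rather than replacing it; check the docs for your version):

```shell
# Inline (data) compression -- per pool, or cluster-wide via the
# bluestore_compression_* options
ceph osd pool set mypool compression_algorithm lz4
ceph osd pool set mypool compression_mode aggressive

# RocksDB (metadata/omap) compression -- appended to the RocksDB options;
# takes effect for new SST files after an OSD restart
ceph config set osd bluestore_rocksdb_options_annex compression=kLZ4Compression
```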

Still the question remains of how to bring all existing data over to be
compressed.
I think the only way is to rewrite the data.  There are scripts out there to do
this for CephFS; the same process used when migrating existing data to a new
data pool should work.  For RBD and RGW, re-write from a client.  With RBD you
might do something like

rbd export myimage - | rbd import - myimage.new; rbd rm myimage; rbd rename
myimage.new myimage

dance, with caution regarding space, names, watchers, client attachments, phase
of moon, etc.
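A more cautious version of the same dance, as a sketch only (pool and image names are placeholders; this assumes the image is not in active use):

```shell
POOL=rbd
IMG=myimage

# Check that no client still has the image open -- the watchers list
# should be empty before proceeding
rbd status "$POOL/$IMG"

# Rewrite the image so the data is written out fresh, then swap names
rbd export "$POOL/$IMG" - | rbd import - "$POOL/$IMG.new"
rbd rm "$POOL/$IMG"
rbd rename "$POOL/$IMG.new" "$POOL/$IMG"
```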

With RGW, maybe something like rclone or Chorus to copy into a new bucket, then
remove the old bucket.
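A sketch of the rclone route ("ceph" is a hypothetical rclone remote configured against the RGW S3 endpoint, and the bucket names are placeholders):

```shell
# Copy everything from the old bucket into the new one
rclone sync ceph:oldbucket ceph:newbucket --progress

# After verifying the copy, delete the old bucket and its contents
rclone purge ceph:oldbucket
```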


Thanks!
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]

--
Best Regards,
Mark Nelson
Head of Research and Development

Clyso GmbH
p: +49 89 21552391 12 | a: Minnesota, USA
w: https://clyso.com | e: [email protected]

We are hiring: https://www.clyso.com/jobs/
