chia7712 commented on code in PR #15516:
URL: https://github.com/apache/kafka/pull/15516#discussion_r2312529139


##########
clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java:
##########
@@ -293,14 +294,15 @@ private static MemoryRecordsBuilder buildRetainedRecordsInto(RecordBatch origina
                                                                  ByteBufferOutputStream bufferOutputStream,
                                                                  final long deleteHorizonMs) {
         byte magic = originalBatch.magic();
+        Compression compression = Compression.of(originalBatch.compressionType()).build();

Review Comment:
   1) I'm not sure whether there has been any discussion about the compression level used during compaction. The current implementation just applies the default level, but perhaps it should respect the topic's configured compression level.
   
   2) Another idea is to introduce a flag that allows compaction to use different compression. This would give users the option to choose a different compression algorithm for older data.
   
   @mimaison @junrao @showuon WDYT?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
