Yunyung commented on code in PR #15516:
URL: https://github.com/apache/kafka/pull/15516#discussion_r2314007436


##########
clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java:
##########
@@ -293,14 +294,15 @@ private static MemoryRecordsBuilder buildRetainedRecordsInto(RecordBatch origina
                                                                  ByteBufferOutputStream bufferOutputStream,
                                                                  final long deleteHorizonMs) {
         byte magic = originalBatch.magic();
+        Compression compression = Compression.of(originalBatch.compressionType()).build();

Review Comment:
   That’s an interesting topic.
   For 1., it would be intuitive and reasonable to use the compression level that is set. I’ll open a minor follow-up for it.
   
   For 2., I’m not entirely sure, but one possible case is that users may prefer tighter compression for cold data to save space, especially if storage is cost-sensitive.
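
   As a side note on point 2, the size/CPU trade-off behind preferring tighter compression for cold data can be seen with a small standalone sketch. This is an illustration only: it uses the JDK's `java.util.zip.Deflater` rather than Kafka's `Compression` builder, and the class and method names here are mine, not from the PR.
   
   ```java
   import java.util.zip.Deflater;
   
   // Illustration: the DEFLATE compression level trades CPU time for output
   // size. A higher level typically yields a smaller payload, which is the
   // motivation for tighter compression on cost-sensitive cold storage.
   public class CompressionLevelDemo {
   
       // Compress `input` at the given level and return the compressed size.
       static int compressedSize(byte[] input, int level) {
           Deflater deflater = new Deflater(level);
           deflater.setInput(input);
           deflater.finish();
           byte[] out = new byte[input.length * 2 + 64];
           int total = 0;
           while (!deflater.finished()) {
               total += deflater.deflate(out); // count bytes; contents discarded
           }
           deflater.end();
           return total;
       }
   
       public static void main(String[] args) {
           // Highly repetitive payload, loosely resembling compressible log data.
           byte[] payload = "kafka-record-value,".repeat(10_000).getBytes();
           int fast = compressedSize(payload, Deflater.BEST_SPEED);        // level 1
           int tight = compressedSize(payload, Deflater.BEST_COMPRESSION); // level 9
           System.out.println("level 1: " + fast + " bytes, level 9: " + tight + " bytes");
       }
   }
   ```
   
   For such data, level 9 is never meaningfully larger than level 1, and the extra CPU cost is paid once at write time, which matters less for cold data that is rarely rewritten.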



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
