junrao commented on code in PR #15516:
URL: https://github.com/apache/kafka/pull/15516#discussion_r2380478649
##########
clients/src/main/java/org/apache/kafka/common/record/MemoryRecords.java:
##########
@@ -293,14 +294,15 @@ private static MemoryRecordsBuilder buildRetainedRecordsInto(RecordBatch origina
                                                       ByteBufferOutputStream bufferOutputStream,
                                                       final long deleteHorizonMs) {
         byte magic = originalBatch.magic();
+        Compression compression = Compression.of(originalBatch.compressionType()).build();
Review Comment:
@chia7712 : Previously, we considered using the topic level compression type instead of the one in the original batch. There is a subtle issue with batch size: during compaction, we group a set of segments so that the total size doesn't exceed 2GB. If we used a different compression type, the compacted data could exceed the max segment size limit and the index append would fail.
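
The sketch below is a hypothetical illustration of that grouping concern, not the actual LogCleaner code: segments are grouped greedily so each group's combined size stays under the ~2GB segment limit, which is only safe if cleaning never produces output larger than its input. Re-compressing retained batches with a different codec can break that assumption, since the new codec may inflate the data past the limit. The class name, method, and sizes here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class SegmentGroupingSketch {
    // Hypothetical cap mirroring the ~2GB max segment size discussed above.
    static final long MAX_SEGMENT_BYTES = Integer.MAX_VALUE;

    /**
     * Greedily group segment sizes so each group's total stays under the limit.
     * This is only safe if the cleaned output is no larger than the input;
     * switching to a different compression type could violate that.
     */
    static List<List<Long>> groupBySize(List<Long> segmentSizes) {
        List<List<Long>> groups = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentBytes = 0;
        for (long size : segmentSizes) {
            if (!current.isEmpty() && currentBytes + size > MAX_SEGMENT_BYTES) {
                groups.add(current);           // close the group before it overflows
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(size);
            currentBytes += size;
        }
        if (!current.isEmpty())
            groups.add(current);
        return groups;
    }

    public static void main(String[] args) {
        // Two ~1000MB segments fit in a single group under the 2GB cap, but only
        // because the grouping assumes the compacted output is not larger than
        // the input; re-compressing with a bigger-output codec could push the
        // result past the cap and fail the index append.
        List<List<Long>> groups = groupBySize(List.of(1000L << 20, 1000L << 20));
        System.out.println("groups: " + groups.size()); // prints 1
    }
}
```

Reusing the original batch's compression type, as the added line does, keeps the size assumption behind that grouping intact.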