fvaleri commented on code in PR #16475:
URL: https://github.com/apache/kafka/pull/16475#discussion_r1664522945
##########
core/src/test/scala/unit/kafka/tools/DumpLogSegmentsTest.scala:
##########
@@ -243,6 +244,32 @@ class DumpLogSegmentsTest {
assertEquals(Map.empty, errors.shallowOffsetNotFound)
}
+ @Test
+ def testDumpRemoteLogMetadataRecords(): Unit = {
Review Comment:
Hi @divijvaidya, thanks. Most of the suggested tests are implemented now.
> 3. metadata contains multiple records (testing with 2 is fine), one batch

I believe this is already covered by the original test.
> do we compact metadata? If yes, can you add cases where segments is a compacted segment (has some offsets missing).

No, the cleanup policy for this topic is hard-coded to `delete`. See
[here](https://github.com/apache/kafka/blob/3.7.1/storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/TopicBasedRemoteLogMetadataManager.java#L491).
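To make the "no compaction" point concrete, here is a minimal sketch of the effect of that creation-time config. The helper name and structure are mine for illustration; only the `cleanup.policy=delete` value reflects the linked Kafka code:

```java
import java.util.HashMap;
import java.util.Map;

public class RemoteLogMetadataTopicConfigSketch {
    // Illustrative helper (not the actual Kafka code): the remote log
    // metadata topic is created with cleanup.policy fixed to "delete",
    // so the segments dumped by DumpLogSegments are never compacted
    // and no offsets go missing due to compaction.
    static Map<String, String> creationConfig() {
        Map<String, String> config = new HashMap<>();
        config.put("cleanup.policy", "delete"); // hard-coded; compaction never applies
        return config;
    }

    public static void main(String[] args) {
        System.out.println(creationConfig().get("cleanup.policy")); // prints "delete"
    }
}
```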
> do we compress metadata? If yes, can you add cases which validate correct deserialization for different compression types?

No, they are not compressed. See for example
[here](https://github.com/apache/kafka/blob/3.7.1/core/src/main/java/kafka/log/remote/RemoteLogManager.java#L746-L750).
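A quick sketch of why the records come out uncompressed: the metadata producer config does not override `compression.type`, so the Kafka producer default (`none`) applies. The property name and default are standard Kafka producer config; the helper below is purely illustrative:

```java
import java.util.Properties;

public class MetadataProducerCompressionSketch {
    // Illustrative only: mirrors the fact that the remote log metadata
    // producer does not set compression.type, so the Kafka producer
    // default ("none") applies and batches are written uncompressed.
    static String effectiveCompression(Properties producerProps) {
        return producerProps.getProperty("compression.type", "none");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder, assumption
        System.out.println(effectiveCompression(props)); // prints "none"
    }
}
```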
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]