[
https://issues.apache.org/jira/browse/KAFKA-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17566253#comment-17566253
]
Doguscan Namal commented on KAFKA-13953:
----------------------------------------
* Changing the leaders to the other replicas did not resolve the issue.
Am I right to assume that the data is corrupted on their disks too?
** Data being corrupted on one broker's disk and then replicated to the other
brokers seems very unlikely to me.
** Therefore, either the data coming in the ProduceRequest or its in-memory
representation must have been corrupted.
*** The broker verifies that the batch-level CRC is correct, but if the
producer computed that CRC after the batch was already corrupted, the broker
would not detect it, right?
*** The other option is that if the broker converts the incoming
ProduceRequest (e.g. message down-conversion), the in-memory representation of
the request might have been corrupted, right?
* There are two corrupted areas: one is the message size field of the record,
and the other is the valueSize of the 3rd header. The bytes in between are fine.
** -155493822 is the message size
** 1991988702 is the headerValue size
** I couldn't find anything special about these numbers; what should I check?
Any ideas for next steps or how to reproduce this?
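To illustrate the CRC point above, here is a minimal sketch (not Kafka code; Kafka uses CRC-32C over the batch from the attributes field onward, and the stdlib's plain CRC-32 stands in for it here): if the producer's in-memory batch is corrupted before the checksum is computed, the broker-side check still passes.

```python
import zlib

def seal_batch(payload: bytes) -> bytes:
    # The producer computes the checksum over its (possibly already corrupt)
    # in-memory payload, then prepends it. Kafka actually uses CRC-32C
    # (Castagnoli); stdlib CRC-32 is a stand-in for the principle.
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return crc.to_bytes(4, "big") + payload

def broker_validates(batch: bytes) -> bool:
    # Broker-side check: recompute the checksum over the payload and compare.
    expected = int.from_bytes(batch[:4], "big")
    return expected == (zlib.crc32(batch[4:]) & 0xFFFFFFFF)

clean = b"record-bytes"
flipped = b"recprd-bytes"  # corrupted BEFORE the producer sealed the batch

assert broker_validates(seal_batch(clean))     # clean batch passes
assert broker_validates(seal_batch(flipped))   # pre-CRC corruption also passes
sealed = bytearray(seal_batch(clean))
sealed[6] ^= 0x01                              # flip a bit AFTER sealing
assert not broker_validates(bytes(sealed))     # post-CRC corruption is caught
```

So a passing CRC on the broker only proves the bytes match what the producer checksummed, not that the producer's memory was healthy at that point.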
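As one low-cost next step for the "anything special about these numbers" question (my suggestion, not from the thread): view the two corrupt values as unsigned 32-bit words and compare their hex and bit patterns, looking for single-bit flips, ASCII bytes, or bytes that look like they belong to an adjacent field.

```python
# The two corrupt 32-bit values reported above, viewed as unsigned words.
corrupt = {
    "record size": -155493822,
    "header value size": 1991988702,
}

for name, value in corrupt.items():
    u32 = value & 0xFFFFFFFF  # reinterpret the signed value as unsigned 32-bit
    print(f"{name:>17}: {value:>11} -> 0x{u32:08X}  {u32:032b}")

# record size       -> 0xF6BB5A42
# header value size -> 0x76BB55DE
```

Notably, the two words differ in their top byte by a single bit (0xF6 vs 0x76) and share the second byte 0xBB, which might hint at a common corruption pattern rather than two independent errors.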
> kafka Console consumer fails with CorruptRecordException
> ---------------------------------------------------------
>
> Key: KAFKA-13953
> URL: https://issues.apache.org/jira/browse/KAFKA-13953
> Project: Kafka
> Issue Type: Bug
> Components: consumer, controller, core
> Affects Versions: 2.7.0
> Reporter: Aldan Brito
> Priority: Blocker
>
> Kafka consumer fails with corrupt record exception.
> {code:java}
> opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server *.*.*.*:<port>
> --topic BQR-PULL-DEFAULT --from-beginning >
> /opt/nokia/kafka-zookeeper-clustering/kafka/topic-data/tmpsdh/dumptest
> [{*}2022-05-15 18:34:15,146]{*} ERROR Error processing message, terminating
> consumer process: (kafka.tools.ConsoleConsumer$)
> org.apache.kafka.common.KafkaException: Received exception when fetching the
> next record from BQR-PULL-DEFAULT-30. If needed, please seek past the record
> to continue consumption.
> at
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1577)
> at
> org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1700(Fetcher.java:1432)
> at
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:684)
> at
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:635)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1276)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
> at
> kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:438)
> at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:104)
> at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:78)
> at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:55)
> at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size
> 0 is less than the minimum record overhead (14)
> Processed a total of 15765197 messages {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)