[ https://issues.apache.org/jira/browse/KAFKA-19613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18017538#comment-18017538 ]

Uladzislau Blok edited comment on KAFKA-19613 at 9/1/25 7:43 PM:
-----------------------------------------------------------------

I'd separate these cases:
 # CorruptMessageException when the client validates records, e.g. in 
FetchCollector#fetchRecords and CompletedFetch#maybeEnsureValid under the hood. 
For this case your proposal makes sense, and the user can decide how to recover.
 # CorruptMessageException in the broker response. This can still happen (see the 
stack trace in [https://github.com/confluentinc/kafka-streams-examples/issues/524]), 
and then we need to verify whether there is a possibility of losing messages when 
seeking. Currently it looks to me as if, when the broker can't read the log, the 
partition data in the response will be completely empty (as I understand it, there 
is no partial read when there are two records and one of them fails).
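For case 1, a caller could branch on the exposed root cause as the ticket proposes. A minimal sketch of that caller-side pattern; note that KafkaException and CorruptRecordException below are local stand-ins (not the real org.apache.kafka classes), and the seek-past-the-record recovery is just one user choice, not a recommendation:

```java
// Local stand-ins for the Kafka classes discussed here, so the sketch is self-contained.
class KafkaException extends RuntimeException {
    KafkaException(String message, Throwable cause) { super(message, cause); }
}

class CorruptRecordException extends RuntimeException {
    CorruptRecordException(String message) { super(message); }
}

public class CorruptRecordHandling {
    // What a poll() caller could do once the root cause is reachable:
    // unwrap it and decide whether to skip past the bad offset or fail.
    static String handle(KafkaException e, long failedOffset) {
        if (e.getCause() instanceof CorruptRecordException) {
            // User-chosen recovery for client-side validation failures:
            // seek one past the corrupted record and keep consuming.
            return "seek to " + (failedOffset + 1);
        }
        return "rethrow"; // anything else stays fatal to the caller
    }

    public static void main(String[] args) {
        KafkaException fromPoll = new KafkaException(
                "Received exception when fetching the next record",
                new CorruptRecordException("Record is corrupt (stored crc mismatch)"));
        System.out.println(handle(fromPoll, 41));
        System.out.println(handle(new KafkaException("unrelated failure", null), 41));
    }
}
```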

UPD: I checked what Lianet wrote about this error in the KS ticket:
{quote}I believe so, a fetch request including a partition and offset should 
fail with CorruptMessage if any record that needs to be included on that 
response (according to the fetch max bytes limits etc) is found corrupted on 
the broker when reading the log to generate the fetch response (but to double 
check on the broker-side handling of fetch in case I'm missing something)

get it on the consumer path: I expect it means the broker identified the data 
as corrupted when reading from the log -> I would expect this is not retriable 
(ex. disk corrupted)
{quote}
Maybe it even makes sense to separate these cases (the broker can't read the log 
vs. the broker can read it but the client rejects it during CRC validation) and 
report the first one with a different exception?
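To illustrate that separation, here is a hypothetical exception split; both subclass names are invented for this sketch and do not exist in Kafka. The point is only that callers could tell a likely-fatal broker-side log corruption apart from a client-side CRC rejection:

```java
// Hypothetical hierarchy illustrating the proposed split; names are invented.
class CorruptRecordException extends RuntimeException {
    CorruptRecordException(String message) { super(message); }
}

// Client-side: CRC check failed during fetch validation -> user may seek past it.
class ClientCrcValidationException extends CorruptRecordException {
    ClientCrcValidationException(String message) { super(message); }
}

// Broker-side: broker could not read its own log (e.g. disk corruption) -> likely fatal.
class BrokerLogCorruptionException extends CorruptRecordException {
    BrokerLogCorruptionException(String message) { super(message); }
}

public class CorruptionOrigin {
    static String classify(CorruptRecordException e) {
        if (e instanceof BrokerLogCorruptionException) {
            return "fatal"; // not retriable; data is gone on the broker
        }
        return "recoverable"; // client-side validation failure; seeking may help
    }

    public static void main(String[] args) {
        System.out.println(classify(new ClientCrcValidationException("crc mismatch")));
        System.out.println(classify(new BrokerLogCorruptionException("broker can't read log")));
    }
}
```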

[~mjsax] [~lianetm] What do you think?



> Expose consumer CorruptRecordException as case of KafkaException
> ----------------------------------------------------------------
>
>                 Key: KAFKA-19613
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19613
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>            Reporter: Uladzislau Blok
>            Assignee: Uladzislau Blok
>            Priority: Minor
>              Labels: need-kip
>         Attachments: corrupted_records.excalidraw.png
>
>
> As part of the analysis of KAFKA-19430, we decided it would be useful to expose 
> the root cause of a consumer request failure (e.g. currently we see just 
> KafkaException instead of CorruptRecordException).
> The idea is to not change the public API, but to expose the root cause as a 
> field of KafkaException.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)