[ https://issues.apache.org/jira/browse/KAFKA-19020 ]
Jimmy Wang deleted comment on KAFKA-19020:
------------------------------------
was (Author: JIRAUSER300327):
[~apoorvmittal10] The current {{maxFetchRecords}} limit isn't strict because
{{lastOffsetFromBatchWithRequestOffset()}} may return an offset beyond the
requested record budget. I think enforcing a strict limit would require
splitting batches at acquisition time; am I on the right track?
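As a rough illustration of what a strict cap might look like (a standalone
sketch; the method and variable names are mine for illustration, not the
existing share-fetch code):
{code:java}
// Hypothetical sketch: clamp the last acquirable offset so that at most
// maxFetchRecords records are acquired. If the clamped offset falls inside
// the fetched batch, the batch would have to be split at acquisition time.
public class StrictMaxFetchRecordsSketch {

    static long clampedLastOffset(long firstAcquirableOffset,
                                  long batchLastOffset,
                                  int maxFetchRecords) {
        long budgetLastOffset = firstAcquirableOffset + maxFetchRecords - 1;
        return Math.min(batchLastOffset, budgetLastOffset);
    }

    public static void main(String[] args) {
        // Batch spans offsets 100..149 (50 records) but only 20 records remain
        // in the budget, so the strict limit stops at offset 119, inside the batch.
        System.out.println(clampedLastOffset(100L, 149L, 20)); // prints 119
    }
}
{code}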
Additionally, do you think it is necessary to forcefully complete the
DelayedShareFetch when the maxFetchRecords limit is satisfied (similar to the
logic in isMinBytesSatisfied())?
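To make that question concrete, here is a minimal sketch of a completion check
shaped like {{isMinBytesSatisfied()}} but keyed on the record budget (the class
and method names are assumptions for illustration, not the actual
{{DelayedShareFetch}} code):
{code:java}
// Hypothetical sketch: complete the delayed share fetch as soon as the record
// budget is exhausted, mirroring the shape of isMinBytesSatisfied().
public class MaxRecordsCompletionSketch {

    private final int maxFetchRecords; // record budget from the share fetch request
    private int acquiredRecords;       // records acquired so far across partitions

    MaxRecordsCompletionSketch(int maxFetchRecords) {
        this.maxFetchRecords = maxFetchRecords;
    }

    void onRecordsAcquired(int count) {
        acquiredRecords += count;
    }

    // Analogue of isMinBytesSatisfied(): true once the budget is met.
    boolean isMaxRecordsSatisfied() {
        return acquiredRecords >= maxFetchRecords;
    }

    // tryComplete()-style check: in the real DelayedOperation this would call
    // forceComplete() rather than just returning a boolean.
    boolean maybeForceComplete() {
        return isMaxRecordsSatisfied();
    }

    public static void main(String[] args) {
        MaxRecordsCompletionSketch sketch = new MaxRecordsCompletionSketch(500);
        sketch.onRecordsAcquired(300);
        System.out.println(sketch.maybeForceComplete()); // false, budget not met
        sketch.onRecordsAcquired(200);
        System.out.println(sketch.maybeForceComplete()); // true, force completion
    }
}
{code}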
These are my early thoughts; I'll run some tests to see how it works.
> Handle strict max fetch records in share fetch
> ----------------------------------------------
>
> Key: KAFKA-19020
> URL: https://issues.apache.org/jira/browse/KAFKA-19020
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Apoorv Mittal
> Assignee: Jimmy Wang
> Priority: Major
> Fix For: 4.2.0
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)