[
https://issues.apache.org/jira/browse/KAFKA-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17941714#comment-17941714
]
Lianet Magrans commented on KAFKA-18216:
----------------------------------------
Hey [~frankvicky] ! Since we already have dates for 4.1
(https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+4.1.0), just
checking if you are still planning to work on this one? I expect we'll need
to start by investigating to fully understand the current situation. It's
marked as Minor but I think it's interesting to understand if it's an issue in
the metric calculation (or if we have something else underneath affecting the
lag). Thanks!
> High water mark or last stable offset aren't always updated after a fetch
> request is completed
> ----------------------------------------------------------------------------------------------
>
> Key: KAFKA-18216
> URL: https://issues.apache.org/jira/browse/KAFKA-18216
> Project: Kafka
> Issue Type: Improvement
> Components: clients, consumer
> Reporter: Philip Nee
> Assignee: TengYao Chi
> Priority: Minor
> Labels: consumer-threading-refactor
> Fix For: 4.1.0
>
>
> We've noticed that AsyncKafkaConsumer doesn't always update the high water
> mark/LSO after handling a successful fetch response. Consumer lag metrics are
> calculated as HWM/LSO - current fetched position, so we suspect this could
> have a subtle effect on how consumer lag is recorded, which might slightly
> reduce the accuracy of client metrics reporting.
> The consumer records consumer lag when reading the fetched records.
> The consumer updates the HWM/LSO when the background thread completes the
> fetch request.
> In the original implementation, the fetcher consistently updates the HWM/LSO
> after handling the completed fetch request.
> In the new implementation, due to the async threading model, we can't
> guarantee the sequence of these events.
> This defect affects neither performance nor correctness, and is therefore
> marked as "Minor".
>
> This can be easily reproduced using the java-produce-consumer-demo.sh
> example. Be sure to produce enough records (I used 200000000 records; fewer
> is fine as well). Custom logging is required.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)