[
https://issues.apache.org/jira/browse/KAFKA-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18011515#comment-18011515
]
Shashank commented on KAFKA-13152:
----------------------------------
Hi [~mjsax], I would like to finish the last part of this issue to complete
KIP-770. I looked at the previous work and reused most of it. On top of that,
I made several changes and opened a [new PR
|https://github.com/apache/kafka/pull/20292]. Once it is reviewed, I plan to
complete it as soon as possible so that it can be targeted for the 4.2 release.
I believe the feature was mostly implemented, but it was left unreviewed and
still requires additional testing. Please review the PR whenever you get the
chance. Thanks!
> Replace "buffered.records.per.partition" & "cache.max.bytes.buffering" with
> "{statestore.cache}/{input.buffer}.max.bytes"
> -------------------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-13152
> URL: https://issues.apache.org/jira/browse/KAFKA-13152
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Reporter: Guozhang Wang
> Assignee: Shashank
> Priority: Major
> Labels: kip
>
> The current config "buffered.records.per.partition" controls the maximum
> number of records to bookkeep per partition; when it is exceeded, we pause
> fetching from that partition. However, this config has two issues:
> * It's a per-partition config, so the total memory consumed depends on the
> number of partitions dynamically assigned.
> * Record size can vary from case to case.
> Hence it is hard to bound the memory usage of this buffering. We should
> consider deprecating that config in favor of a global one, e.g.
> "input.buffer.max.bytes", which controls how many bytes in total are allowed
> to be buffered. This is doable since we buffer the raw records as
> <byte[], byte[]>.
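A minimal sketch of what configuring the proposed global bounds could look like, assuming the KIP-770 config names land as proposed; note that "input.buffer.max.bytes" is not yet in a released Kafka version, so plain string keys are used here instead of StreamsConfig constants:

```java
import java.util.Properties;

public class Kip770ConfigSketch {

    // Builds a Streams properties object using the proposed global,
    // byte-based configs from KIP-770 (names are assumptions until released).
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put("application.id", "kip-770-demo");           // hypothetical app id
        props.put("bootstrap.servers", "localhost:9092");      // hypothetical broker

        // Proposed global bound on raw bytes buffered across all input
        // partitions; replaces the per-partition buffered.records.per.partition.
        props.put("input.buffer.max.bytes", 512L * 1024 * 1024);

        // Proposed replacement for cache.max.bytes.buffering
        // (total state-store cache size in bytes).
        props.put("statestore.cache.max.bytes", 64L * 1024 * 1024);
        return props;
    }

    public static void main(String[] args) {
        Properties p = streamsProps();
        System.out.println(p.get("input.buffer.max.bytes"));
    }
}
```

Because both new configs are byte totals rather than per-partition record counts, the memory footprint stays bounded regardless of how many partitions a given instance is assigned.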
--
This message was sent by Atlassian Jira
(v8.20.10#820010)