[
https://issues.apache.org/jira/browse/KAFKA-10034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17838447#comment-17838447
]
Ramiz Mehran commented on KAFKA-10034:
--------------------------------------
"Firstly, this configuration is a cap on the maximum uncompressed record batch
size." should be changed to "Firstly, this configuration is a cap on the
maximum uncompressed record size."
> Clarify Usage of "batch.size" and "max.request.size" Producer Configs
> ---------------------------------------------------------------------
>
> Key: KAFKA-10034
> URL: https://issues.apache.org/jira/browse/KAFKA-10034
> Project: Kafka
> Issue Type: Improvement
> Components: docs, producer
> Reporter: Mark Cox
> Assignee: Badai Aqrandista
> Priority: Minor
>
> The documentation around the producer configurations "batch.size" and
> "max.request.size", and how they relate to one another, can be confusing.
> In reality, "max.request.size" is a hard limit on each individual record,
> but the documentation makes it seem that it is the maximum size of a request
> sent to Kafka. If "batch.size" is set greater than "max.request.size" (and
> each individual record is smaller than "max.request.size") the producer
> could end up sending larger requests to Kafka than expected.
> There are a few things that could be considered to make this clearer:
> # Improve the documentation to clarify the two producer configurations and
> how they relate to each other
> # Provide a producer check, and possibly a warning, if "batch.size" is found
> to be greater than "max.request.size"
> # The producer could take the _minimum_ of "batch.size" and "max.request.size"
>
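Suggestions 2 and 3 above could be sketched as follows. This is a hypothetical illustration only, not actual Kafka producer code; the helper name and the warning text are invented for the example:

```python
import warnings

def effective_batch_size(batch_size: int, max_request_size: int) -> int:
    """Hypothetical helper: warn if batch.size exceeds max.request.size
    (suggestion 2) and clamp the effective batch size to the smaller of
    the two values (suggestion 3)."""
    if batch_size > max_request_size:
        warnings.warn(
            f"batch.size ({batch_size}) exceeds max.request.size "
            f"({max_request_size}); capping batches at max.request.size"
        )
    return min(batch_size, max_request_size)

# A batch.size larger than max.request.size is capped:
print(effective_batch_size(2_097_152, 1_048_576))  # 1048576
```

With such a check in place, a misconfigured producer would surface the inconsistency at startup instead of silently producing oversized requests.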
--
This message was sent by Atlassian Jira
(v8.20.10#820010)