[
https://issues.apache.org/jira/browse/SPARK-49442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17877256#comment-17877256
]
Jungtaek Lim commented on SPARK-49442:
--------------------------------------
Could you try setting the SQL config
"spark.sql.streaming.kafka.useDeprecatedOffsetFetching" to "false"? We are
moving to AdminClient for retrieving partitions/offsets during microbatch planning.
[https://archive.apache.org/dist/spark/docs/3.3.2/structured-streaming-kafka-integration.html#offset-fetching]
We still use a Kafka consumer on the executors (for each microbatch), but I don't
know whether the massive request volume is coming from the driver or the executors.
Also, if you want to set Kafka-related configs, use the prefix "kafka.",
e.g. *kafka.metadata.max.age.ms*
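As a minimal sketch of the two suggestions above (the broker address and topic name here are hypothetical, for illustration only):

```python
# Kafka source options for a Structured Streaming query.
# Kafka client configs must carry the "kafka." prefix, as noted above;
# keys without the prefix (e.g. "subscribe") are Spark source options.
kafka_options = {
    "kafka.bootstrap.servers": "broker-1:9092",  # hypothetical broker
    "subscribe": "events",                       # hypothetical topic
    # Forwarded to the Kafka client; controls metadata refresh interval.
    "kafka.metadata.max.age.ms": "300000",
}

# In an actual job (requires a running SparkSession):
# spark.conf.set(
#     "spark.sql.streaming.kafka.useDeprecatedOffsetFetching", "false")
# df = (spark.readStream
#           .format("kafka")
#           .options(**kafka_options)
#           .load())
```

Whether lowering *kafka.metadata.max.age.ms* actually reduces the request rate depends on whether the requests originate from the driver-side planning path or the executor-side consumers, as mentioned above.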
> Complete Metadata requests on each micro batch causing Kafka issues
> -------------------------------------------------------------------
>
> Key: SPARK-49442
> URL: https://issues.apache.org/jira/browse/SPARK-49442
> Project: Spark
> Issue Type: Bug
> Components: Structured Streaming
> Affects Versions: 3.3.2
> Reporter: vipin Kumar
> Priority: Major
> Labels: Kafka, spark-streaming-kafka
>
> We have noticed that Spark issues complete metadata requests on each micro
> batch, and this causes a high metadata request rate at small micro batch
> intervals.
>
> For example, with a Kafka topic of 1900 partitions and a 10 sec micro batch,
> we are seeing on the order of
> ~*360K* metadata requests / sec.
> With the same job at a 60 sec micro batch, we are observing ~*60K* metadata
> requests.
>
> Metadata requests are supposed to be controlled by *metadata.max.age.ms*, but
> this config has no effect on Spark consumers; its default is 5 mins, yet we
> are still seeing this huge volume of requests.
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]