[
https://issues.apache.org/jira/browse/KAFKA-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
karthickthavasiraj09 updated KAFKA-19403:
-----------------------------------------
Description:
* We are experiencing significant slowness while reading data from Azure Event
Hubs using Azure Databricks. After an initial analysis, the Microsoft support
team confirmed that the root cause appears to be on the Kafka side. We are
reaching out for your assistance in investigating and resolving this issue.
Below are the key findings and debug logs provided by the Microsoft team:
** The data read operation took *49 minutes* in total.
** Of this, a single Spark task, {*}Task 143, alone took 46 minutes{*},
indicating a bottleneck in that specific task.
** The job duration was {*}49 minutes and 30 seconds{*}.
Relevant Log Snippets:
25/04/15 14:21:44 INFO KafkaBatchReaderFactoryWithRowBytesAccumulator:
Creating Kafka reader topicPartition=<topic-name>-0 fromOffset=16511904
untilOffset=16658164,
for queryId=dd660d4d-05cc-4a8e-8f93-d202ec78fec3
runId=af7eb711-7310-4788-85b7-0977fc0756b7 batchId=73 taskId=143 partitionId=0
25/04/15 15:07:21 INFO KafkaDataConsumer:
From Kafka topicPartition=<topic-name>-0
groupId=spark-kafka-source-da79e0fc-8ee5-40f5-a127-7b31766b3022--1737876659-executor
read 146260 records through 4314 polls (polled out 146265 records), taking
2526471821132 ns,
over a timespan of 2736294068630 ns.
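As a rough sanity check on those numbers (our own back-of-the-envelope math,
not figures from the Microsoft report), the polls account for ~42 of the ~46
minutes, at roughly 586 ms and only ~34 records per poll:
{code:scala}
// Back-of-the-envelope conversion of the figures in the KafkaDataConsumer log line.
val pollTimeNs = 2526471821132L // total time spent inside polls
val timespanNs = 2736294068630L // wall-clock span of the whole read
val polls      = 4314L
val records    = 146260L

val pollMinutes = pollTimeNs / 1e9 / 60     // ~42.1 minutes spent polling
val msPerPoll   = pollTimeNs / 1e6 / polls  // ~586 ms per poll on average
val recsPerPoll = records.toDouble / polls  // ~33.9 records per poll

println(f"$pollMinutes%.1f min polling, $msPerPoll%.0f ms/poll, $recsPerPoll%.1f records/poll")
{code}
With fetch.min.bytes at its default of 1, ~586 ms per poll for only ~34 records
would point at the broker (or the network path to it) rather than at local
processing.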
Additionally, the thread stack trace indicates that the task was mostly waiting
on Kafka to respond. See the following thread details captured during the
slowness:
Executor task launch worker for task 0.0 in stage 147.0 (TID 143)
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
...
kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1759)
...
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.fetchRecord(KafkaDataConsumer.scala:517)
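For reference, the KafkaConsumer.position() frame in that trace is a blocking
broker round-trip (the consumer waits up to default.api.timeout.ms, 60 s by
default, for an offset response), so the task appears stalled on responses
from the Event Hubs Kafka endpoint rather than inside Spark. Below is a
sketch, purely illustrative and not a confirmed fix, of the Spark-side reader
options that are commonly tuned when a single Kafka partition dominates a
batch; the option names come from the Spark Kafka integration guide, and the
values are placeholders:
{code:scala}
// Illustrative sketch only; assumes a SparkSession named `spark`, as in a
// Databricks notebook. Values are placeholders, not recommendations.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
  .option("subscribe", "<topic-name>")
  // Split one large per-partition offset range across several Spark tasks,
  // so a single slow partition (like <topic-name>-0 here) does not
  // serialize the whole batch.
  .option("minPartitions", "32")
  // Pass-through Kafka consumer settings (the "kafka." prefix): ask the
  // broker for bigger batches per fetch instead of many near-empty responses.
  .option("kafka.fetch.min.bytes", "65536")
  .option("kafka.fetch.max.wait.ms", "500")
  .load()
{code}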
*We kindly request that the Kafka team look into this issue. The task appears
to be blocked for an extended period during Kafka polling. Any insights into
why Kafka is not responding promptly, or recommendations for configuration
changes or optimizations, would be greatly appreciated.*
*Please let us know if any additional information or diagnostics are required
from our end.*
*Thank you for your support.*
was:
We had an issue while reading data from Azure Event Hubs through Azure
Databricks. After working with the Microsoft team, they confirmed that there's
an issue on the Kafka side. I'm sharing the debug logs provided by the
Microsoft team below.
The read took 49m; we see that task 143 alone takes 46 mins _(out of the job
duration of 49m 30s)_:
25/04/15 14:21:44 INFO KafkaBatchReaderFactoryWithRowBytesAccumulator:
Creating Kafka reader topicPartition=<topic-name>-0 fromOffset=16511904
untilOffset=16658164, for queryId=dd660d4d-05cc-4a8e-8f93-d202ec78fec3
runId=af7eb711-7310-4788-85b7-0977fc0756b7 batchId=73 taskId=143 partitionId=0
25/04/15 15:07:21 INFO KafkaDataConsumer: From Kafka
topicPartition=<topic-name>-0
groupId=spark-kafka-source-da79e0fc-8ee5-40f5-a127-7b31766b3022--1737876659-executor
read 146260 records through 4314 polls (polled out 146265 records), taking
2526471821132 nanos, during time span of 2736294068630 nanos.
And this task is waiting on Kafka to respond for most of that time, as we can
see from the thread dump:
Executor task launch worker for task 0.0 in stage 147.0 (TID 143)
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) - locked sun.nio.ch.EPollSelectorImpl@54f8f9b6
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
kafkashaded.org.apache.kafka.common.network.Selector.select(Selector.java:874)
kafkashaded.org.apache.kafka.common.network.Selector.poll(Selector.java:465)
kafkashaded.org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:251)
kafkashaded.org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1759)
kafkashaded.org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1717)
org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumer.getAvailableOffsetRange(KafkaDataConsumer.scala:110)
org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumer.fetch(KafkaDataConsumer.scala:84)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.$anonfun$fetchData$1(KafkaDataConsumer.scala:593)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$$Lambda$4556/228899458.apply(Unknown Source)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.timeNanos(KafkaDataConsumer.scala:696)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.fetchData(KafkaDataConsumer.scala:593)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.fetchRecord(KafkaDataConsumer.scala:517)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.$anonfun$get$1(KafkaDataConsumer.scala:325)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer$$Lambda$4491/342980175.apply(Unknown Source)
org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.runUninterruptiblyIfPossible(KafkaDataConsumer.scala:686)
org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.get(KafkaDataConsumer.scala:301)
org.apache.spark.sql.kafka010.KafkaBatchPartitionReader.next(KafkaBatchPartitionReader.scala:106)
> We're facing an issue in Kafka while reading data from Azure Event Hubs
> through Azure Databricks
> ---------------------------------------------------------------------------------------------
>
> Key: KAFKA-19403
> URL: https://issues.apache.org/jira/browse/KAFKA-19403
> Project: Kafka
> Issue Type: Bug
> Components: connect, consumer, network
> Affects Versions: 3.3.1
> Environment: Production
> Reporter: karthickthavasiraj09
> Priority: Blocker
> Labels: patch
>